In the dynamic realm of Android app development, efficiency is key. Enter Azure DevOps, Microsoft's integrated solution that transforms the development lifecycle. This tutorial will show you how to leverage Azure DevOps for seamless Android app development.

What Is Azure DevOps?

Azure DevOps is not just a version control system; it's a comprehensive set of development and deployment tools that integrates with popular platforms and technologies. From version control (Azure Repos) to continuous integration and delivery (Azure Pipelines) and application monitoring (Azure Application Insights), Azure DevOps offers a unified environment for managing your entire development cycle. This unified approach significantly enhances collaboration, accelerates time to market, and ensures a more reliable and scalable deployment of your Android applications.

For feature-rich Android mobile applications, Azure DevOps provides a single platform for version control, continuous integration, and automated testing. With Azure Pipelines, you can orchestrate the entire build and release process and ensure that changes from each team member integrate smoothly. This integrated approach promotes collaboration, accelerates the development cycle, and provides robust tools for monitoring and troubleshooting, which helps you meet tight deadlines while keeping deployments of the Android application reliable and scalable.

Use the azure-pipelines.yml file at the root of the repository; this file drives the CI (continuous integration) build of the Android application. Follow the instructions in the article "Introduction to Azure DevOps" to create a build pipeline for an Android application. After creating a new build pipeline, you will be prompted to choose a repository: select GitHub or Azure Repos. You then need to authorize the Azure DevOps service to connect to your GitHub account. Click Authorize, and the connection is integrated with your build pipeline. Once the connection to GitHub has been authorized, select the repository that will be used to build the application.

How To Build an Android Application With Azure

Step 1: Get a Fresh Virtual Machine

Azure Pipelines can build and deploy using a Microsoft-hosted agent: every build or release pipeline run gets a fresh virtual machine (VM). If Microsoft-hosted agents do not meet your needs, use a self-hosted agent, which acts as the build host.

YAML
pool:
  name: Hosted VS2017
  demands: java
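Note that "Hosted VS2017" is an older hosted pool name. On current Azure DevOps organizations, Microsoft-hosted agents are usually requested by VM image instead; the snippet below is a hedged, illustrative equivalent (the image name is an assumption, pick whatever image your project needs):

YAML
pool:
  vmImage: 'ubuntu-latest'   # any current Microsoft-hosted image works for Gradle builds

If your organization still exposes the Hosted VS2017 pool, the original snippet can be used as-is; the pool only determines which VM image runs the steps that follow.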
Step 2: Build a Mobile Application

Build the mobile application using the Gradle wrapper script. Check out the branch and repository containing the gradlew wrapper script, since it drives the build. If the agent runs on Windows, it must use gradlew.bat; if the agent runs on Linux or macOS, it can use the gradlew shell script.

Step 3: Set Directories

Set the working directory and the Gradle wrapper file path for the task.

YAML
steps:
  - task: Gradle@2
    displayName: 'gradlew assembleDebug'
    inputs:
      gradleWrapperFile: 'MobileApp/SourceCode -Android/gradlew'
      workingDirectory: 'MobileApp/SourceCode -Android'
      tasks: assembleDebug
      publishJUnitResults: false
      checkStyleRunAnalysis: true
      findBugsRunAnalysis: true
      pmdRunAnalysis: true

The next task detects the open source components in your build, along with security vulnerabilities, vulnerable libraries, and outdated libraries (including dependencies pulled in from the source code). You can view the results at the build, project, and account levels.

YAML
  - task: whitesource.ws-bolt.bolt.wss.WhiteSource Bolt@18
    displayName: 'WhiteSource Bolt'
    inputs:
      cwd: 'MobileApp/SourceCode -Android'

Step 4: Copy Files

Copy the .apk file from the source folder to the artifact staging directory.

YAML
  - task: CopyFiles@2
    displayName: 'Copy Files to: $(build.artifactStagingDirectory)'
    inputs:
      SourceFolder: 'MobileApp/SourceCode -Android'
      Contents: '**/*.apk'
      TargetFolder: '$(build.artifactStagingDirectory)'

Use the next task to publish the build artifacts to Azure Pipelines or a file share; the artifacts are stored on the Azure DevOps server.

YAML
  - task: PublishBuildArtifacts@1
    displayName: 'Publish Artifact: drop'

The new pipeline wizard should recognize that you already have an azure-pipelines.yml at the root of the repository. The azure-pipelines.yml file contains all the settings the build service uses to build and test the application, as well as to generate the output artifacts that the release pipeline (CD) will later use to deploy the app.

Step 5: Save and Queue the Build

Once everything is in place, save and queue the build so you can see the logs for each task of the corresponding job.

Step 6: Extract the Artifact Zip Folder

When the build has finished, extract the artifact zip folder, copy the .apk file onto a mobile device, and install it.

Conclusion

Azure DevOps is a game-changer for Android app development, streamlining processes and boosting collaboration. Encompassing version control, continuous integration, and automated testing, this unified solution accelerates development cycles and ensures the reliability and scalability of Android applications. This tutorial has walked you through building and deploying an Android mobile application using Azure DevOps. By following these steps, you can deploy Android applications efficiently, meet tight deadlines, and keep releases reliable. Whether you're optimizing an existing workflow or just entering Android development, integrating Azure DevOps will significantly enhance your efficiency and project success.
This article will demonstrate how to build a complete CI/CD pipeline in Visual Studio and deploy it to Azure using the Continuous Delivery Extension for Visual Studio. Using CI, you merge code changes continuously so you can confirm that those changes work with the existing code base and run your tests. With CD, you repeatedly push code through a deployment pipeline where it is built, tested, and then deployed. Together, this CI/CD practice automates the build, testing, and deployment of your application and provides complete traceability, so you can see code changes, reviews, and test results.

What Is Visual Studio?

Visual Studio is a powerful Integrated Development Environment (IDE) with a robust environment for coding, debugging, and building applications. Azure DevOps (previously VS Team Services) adds a comprehensive collection of collaboration tools and extensions that closely integrate a CI/CD pipeline with the Visual Studio environment. CI (Continuous Integration) continuously merges code changes into the existing code base, while CD (Continuous Deployment) pushes them through the deployment pipeline to be built, tested, and deployed. Visual Studio with the CI/CD extension therefore automates the build, deployment, and testing process of software development, and it provides complete traceability of code changes, reviews, and test results.

The quality of software depends largely on the process used to develop it. CI/CD practices pursue this goal through continuous delivery and deployment. This not only ensures software quality but also enhances the security and profitability of production, and it shortens the time needed to ship new features, creating happy customers with less stress on the development team.

To create a CI build, a release pipeline, and Release Management that deploys the code into Azure, all you need is an existing web-based application and an extension from the marketplace. DZone has previously covered how to build a CI/CD pipeline from scratch.

How To Build a CI/CD Pipeline With Visual Studio

Step 1: Enable the Continuous Delivery Extension for Visual Studio

To use the Continuous Delivery Tools for Visual Studio extension, you just need to enable it. The extension makes it simple to automate and stay up to date on your DevOps pipeline for projects targeting Azure, and it also helps you improve code quality and security.

Go to Tools and choose Extensions and Updates. In the window that appears, select Continuous Delivery Tools for Visual Studio and click Enable. If you don't have the Continuous Delivery Tools installed, go to the online Visual Studio Marketplace, search for "Continuous," and download it.

Step 2: Create a Project in Team Services

In this step, you create a project in Team Services and put your project code there without leaving the IDE. Team Services is the tool that lets you build Continuous Integration and Continuous Delivery.

Go to Solution Explorer and right-click your web-based project. Click the new context menu item Configure Continuous Delivery. A Configure Continuous Delivery window is displayed. Click the Add this project to source control plus button, then click the Publish Git Repo button located in the Publish to Visual Studio Team Services section of Team Explorer.
Your Microsoft account is automatically picked up from the IDE, and the Team Services domain and repository name that will be used are displayed. Click the Publish Repository button to create the project in Team Services. After the synchronization finishes, you will see the project in Team Explorer. The project now exists in your Team Services account: the source code is uploaded to a Git repository, and a continuous delivery pipeline is generated automatically. In the Output window, you can see that CI/CD is being set up for the project. After a while, you will get three links:

- A link to the build
- A link to the release
- A link to the assets created in Azure that will be the target of your deployment (the application service)

Step 3: Open the Project in Team Services

A build definition is the entity through which you define your automated build process. In the build definition, you compose a set of tasks, each of which performs one step of your build. Copy the build definition link provided in the Output window and paste it into a browser to open the project containing your application in Team Services. The summary for the build definition is displayed, and you can see that the build is already running. Click the build link to see the output of the build server, which is running your build automatically. Click Edit build definition to add additional tasks or customize the tasks that are already there.

Step 4: Test Assemblies Task

Each task has a Version selector that lets you specify the major version of the task used in your build or deployment. When a new minor version is released (for example, 1.2 to 1.3), your build or release automatically uses it. However, when a new major version is released (for example, 2.0), your build or release keeps using the major version you specified until you edit the definition and manually switch to the new major version.

Click Test Assemblies. A small flag icon indicates that a new preview version of this task is available. Click the flag icon and choose version 2.* to preview it. Several new options appear for Test Assemblies. One of them is Run only impacted tests: the tooling analyzes which lines of code were changed against the tests that were run in the past, so it knows which tests execute which lines of code, and you run only the tests impacted by your changes instead of the full suite. Another is Run tests in parallel on multi-core machines, which runs your tests so that all available cores are used; this increases the number of tests running at the same time and reduces the total test time.
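For teams that later move this build to YAML, the same Test Assemblies options map onto inputs of the VSTest task. The sketch below is illustrative only; the input names (testAssemblyVer2, runOnlyImpactedTests, runInParallel) are quoted from memory of the version 2 task, so verify them against the task reference before relying on this:

YAML
steps:
  - task: VSTest@2
    displayName: 'Test Assemblies'
    inputs:
      testAssemblyVer2: |
        **\*test*.dll
        !**\obj\**
      runOnlyImpactedTests: true   # assumption: run only tests impacted by the code changes
      runInParallel: true          # assumption: use all available cores on the agent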
Step 5: Add an Additional Task

A task is the building block for defining automation in a build definition or in an environment of a release definition. It is simply a packaged script or procedure that has been abstracted behind a set of inputs, and built-in tasks cover the fundamental build and deployment scenarios. Click the Add Task plus button to create an additional task. A long list of tasks is displayed that can be run out of the box, targeting practically any language or platform (Chef, CocoaPods, Docker, Node.js, Java, and more). If the feature or extension you want is not listed, simply click the Check out our Marketplace link displayed above the list of tasks.

Step 6: Set Encrypted and Non-Encrypted Variables

Variables are a great way to store and share key bits of data in your build definition, and some build templates define certain variables for you automatically. Click the second tab, Variables (next to the Tasks tab). Click the padlock next to a variable value to encrypt it. After encryption, the value is displayed as asterisks, and no one can see it except the person who encrypted it.

Step 7: Turn On the Continuous Integration (CI) Trigger

On the Triggers tab, you specify the events that trigger the build, and you can use the same build definition for both CI and scheduled builds. Click the third tab, Triggers, to set up Continuous Integration. Enabling the Continuous Integration trigger means the build runs automatically whenever someone checks in code, in other words, whenever a new version of the source artifacts is available.

Step 8: Build Definition Options

If the build process fails, you can automatically create a work item to track getting the problem fixed, specify the work item type, and choose whether to assign the work item to the requestor. For example, if this is a CI build and a team member checks in code that breaks the build, the work item is assigned to that person.

Click the fourth tab, Options, and enable Create Work Item on Failure. CI builds are meant to run on every check-in, and if one fails because a developer made an error, a work item is created automatically to track getting the problem fixed. The Default agent queue option is displayed in the second half of the Options tab. The drop-down list contains all available pools:

- Default (if your team uses private agents you set up yourself)
- Hosted (a Windows-based machine, if your team uses VS2017 or VS2015)
- Hosted Linux Preview (if your team uses development tools on Ubuntu)
- Hosted VS2017 (if your team uses Visual Studio 2017)

Step 9: Build Summary

You can see the summary of the build, in other words everything that happened during it, including:

- Code coverage
- All work items and tasks
- Deployments

Step 10: Release Definition

A release definition is one of the fundamental concepts in Release Management for VSTS and TFS. It defines the end-to-end release process for an application to be deployed across various environments. Remember that, as a developer, you never have to leave Visual Studio to deploy the application into Azure. A release definition that deploys the code into Azure is displayed. Click the three dots next to that release definition and select Edit from the context menu. A release definition consists of:

- A series of environments
- The tasks that you want to perform in each environment

Step 11: Check That the Application Is Really Deployed From Visual Studio Into Azure

Microsoft Azure is a cloud computing service for building, testing, deploying, and managing applications and services through a global network of Microsoft-managed data centers. In this step, you verify that your web application is deployed in Azure:

- Go to your Azure portal.
- Click Resource Groups and search for "demo."
- In the search results, click your web project "e2edemo."
- Open the web application link.
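Although this walkthrough uses the classic web-based designer, the same CI trigger and variable concepts can also be kept in source control as an azure-pipelines.yml file. The snippet below is a hedged sketch (the branch name and variable are illustrative; secret values still have to be marked as secret in the pipeline UI or stored in a variable group rather than written into the YAML):

YAML
trigger:
  branches:
    include:
      - main                 # run a CI build on every push to this branch
variables:
  buildConfiguration: 'Release'   # plain, non-encrypted variable
  # secret variables are not defined here; mark them as secret in the
  # pipeline settings or reference a variable group instead
steps:
  - script: echo "Building $(buildConfiguration)"
    displayName: 'Placeholder build step'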
Further Reading: Release pipelines using Azure DevOps.

Conclusion

Continuous Integration is a software development practice in which you build and test software every time a developer pushes code to the application. Continuous Delivery is a software engineering approach in which Continuous Integration, automated testing, and automated deployment capabilities allow software to be developed and deployed rapidly, reliably, and repeatedly with minimal human intervention. High-performing teams usually practice both Continuous Integration (CI) and Continuous Delivery (CD). VSTS not only automates the build, testing, and deployment of your application; it also gives you complete traceability, so you can see everything in the build, including code changes, reviews, and test results, making it a tool that fully supports DevOps practices.
In this brief demonstration, we’ll set up and run three instances of WildFly on the same machine (localhost). Together they will form a cluster. It’s a rather classic setup, where the application servers need to synchronize the content of their applications’ sessions to ensure failover if one of the instances fails. This configuration guarantees that, if one instance fails while processing a request, another one can pick up the work without any data loss. Note that we’ll use multicast to discover the members of the cluster and ensure that the cluster’s formation is fully automated and dynamic.

Install Ansible and Its Collection for WildFly

On a Linux system using a package manager, installing Ansible is pretty straightforward:

Shell
sudo dnf install ansible-core

Please refer to the documentation available online for installation on other operating systems. Note that this demonstration assumes you are running both the Ansible controller and the target (the same machine in our case) on a Linux system. However, it should work on any other operating system with a few adjustments.

Before going further, double-check that you are running a recent enough version of Ansible (2.14 or above will do, but 2.9 is the bare minimum), for example with ansible --version:

Shell
ansible [core 2.15.3]
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.11/site-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/bin/ansible
  python version = 3.11.2 (main, Jun 6 2023, 07:39:01) [GCC 8.5.0 20210514 (Red Hat 8.5.0-18)] (/usr/bin/python3.11)
  jinja version = 3.1.2
  libyaml = True

The next and last step to get your Ansible environment ready is to install the Ansible collection for WildFly on the controller (the machine that will run Ansible):

Shell
# ansible-galaxy collection install middleware_automation.wildfly
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Downloading https://galaxy.ansible.com/api/v3/plugin/ansible/content/published/collections/artifacts/middleware_automation-wildfly-1.4.3.tar.gz to /root/.ansible/tmp/ansible-local-355dkk9kf5/tmpc2qtag11/middleware_automation-wildfly-1.4.3-9propr_x
Downloading https://galaxy.ansible.com/api/v3/plugin/ansible/content/published/collections/artifacts/ansible-posix-1.5.4.tar.gz to /root/.ansible/tmp/ansible-local-355dkk9kf5/tmpc2qtag11/ansible-posix-1.5.4-pq0cq2mn
Installing 'middleware_automation.wildfly:1.4.3' to '/root/.ansible/collections/ansible_collections/middleware_automation/wildfly'
middleware_automation.wildfly:1.4.3 was installed successfully
Installing 'ansible.posix:1.5.4' to '/root/.ansible/collections/ansible_collections/ansible/posix'
Downloading https://galaxy.ansible.com/api/v3/plugin/ansible/content/published/collections/artifacts/middleware_automation-common-1.1.4.tar.gz to /root/.ansible/tmp/ansible-local-355dkk9kf5/tmpc2qtag11/middleware_automation-common-1.1.4-nks7pvy7
ansible.posix:1.5.4 was installed successfully
Installing 'middleware_automation.common:1.1.4' to '/root/.ansible/collections/ansible_collections/middleware_automation/common'
middleware_automation.common:1.1.4 was installed successfully
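Later in this article the playbook is run with ansible-playbook -i inventory playbook.yml, so you also need a small inventory file describing the target. Since everything here runs on localhost, a minimal sketch is enough (the file name and YAML layout are our own choice, not something the collection requires):

YAML
# inventory - targets the local machine directly, without SSH
all:
  hosts:
    localhost:
      ansible_connection: local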
Set up the WildFly Cluster

For simplicity’s sake, and to allow you to reproduce this demonstration on a single machine (physical or virtual) or even a container, we opted to deploy our three instances on one target. We chose localhost as the target so that the demonstration can even be performed without a remote host.

There are essentially two steps to set up the WildFly cluster:

1. Install WildFly on the targeted hosts (here, just localhost). This means downloading the archive from the WildFly website and decompressing it into the appropriate directory (JBOSS_HOME). These tasks are handled by the wildfly_install role supplied by the Ansible collection for WildFly.
2. Create the configuration files to run several instances of WildFly. Because we’re running multiple instances on a single host, you also need to ensure that each instance has its own subdirectories and set of ports, so that the instances can coexist and communicate. Fortunately, this functionality is provided by another role within the collection, wildfly_systemd.

Ansible Playbook To Install WildFly

Here is the playbook we’ll use to deploy our cluster. Its content is relatively self-explanatory, at least if you are somewhat familiar with the Ansible syntax.

YAML
- name: "WildFly installation and configuration"
  hosts: "{{ hosts_group_name | default('localhost') }}"
  become: yes
  vars:
    wildfly_install_workdir: '/opt/'
    wildfly_config_base: standalone-ha.xml
    wildfly_version: 30.0.1.Final
    wildfly_java_package_name: java-11-openjdk-headless.x86_64
    wildfly_home: "/opt/wildfly-{{ wildfly_version }}"
    instance_http_ports:
      - 8080
      - 8180
      - 8280
    app:
      name: 'info-1.2.war'
      url: 'https://drive.google.com/uc?export=download&id=13K7RCqccgH4zAU1RfOjYMehNaHB0A3Iq'
  collections:
    - middleware_automation.wildfly
  roles:
    - role: wildfly_install
  tasks:
    - name: "Set up for WildFly instance {{ item }}."
      ansible.builtin.include_role:
        name: wildfly_systemd
      vars:
        wildfly_config_base: 'standalone-ha.xml'
        wildfly_instance_id: "{{ item }}"
        instance_name: "wildfly-{{ wildfly_instance_id }}"
        wildfly_config_name: "{{ instance_name }}.xml"
        wildfly_basedir_prefix: "/opt/{{ instance_name }}"
        service_systemd_env_file: "/etc/wildfly-{{ item }}.conf"
        service_systemd_conf_file: "/usr/lib/systemd/system/wildfly-{{ item }}.service"
      loop: "{{ range(0,3) | list }}"
    - name: "Wait for each instance HTTP ports to become available."
      ansible.builtin.wait_for:
        port: "{{ item }}"
      loop: "{{ instance_http_ports }}"
    - name: "Checks that WildFly server is running and accessible."
      ansible.builtin.get_url:
        url: "http://localhost:{{ port }}/"
        dest: "/opt/{{ port }}"
      loop: "{{ instance_http_ports }}"
      loop_control:
        loop_var: port

In short, this playbook first uses the Ansible collection for WildFly to install the appserver by means of the wildfly_install role. This downloads all the artifacts, creates the required system groups and users, installs dependencies (such as unzip), and so on. At the end of its execution, everything required to run WildFly on the target host is installed, but the server is not yet running. That’s what happens in the next step.

In the tasks section of the playbook, we then call another role provided by the collection: wildfly_systemd. This role takes care of integrating WildFly as a regular system service into the service manager. Here, we use a loop to create not one but three different services. Each one has the same configuration (standalone-ha.xml) but runs on a different set of ports, using a different set of directories to store its data.

Run the Playbook!
Now, let’s run our Ansible playbook and observe its output: Shell $ ansible-playbook -i inventory playbook.yml PLAY [WildFly installation and configuration] ********************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Validating arguments against arg spec 'main'] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Ensure prerequirements are fullfilled.] *** included: /root/.ansible/collections/ansible_collections/middleware_automation/wildfly/roles/wildfly_install/tasks/prereqs.yml for localhost TASK [middleware_automation.wildfly.wildfly_install : Validate credentials] **** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Validate existing zipfiles wildfly-30.0.1.Final.zip for offline installs] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Validate patch version for offline installs] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Validate existing additional zipfiles {{ eap_archive_filename } for offline installs] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Check that required packages list has been provided.] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Prepare packages list] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Add JDK package java-11-openjdk-headless.x86_64 to packages list] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Install required packages (5)] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Ensure required local user exists.] *** included: /root/.ansible/collections/ansible_collections/middleware_automation/wildfly/roles/wildfly_install/tasks/user.yml for localhost TASK [middleware_automation.wildfly.wildfly_install : Check arguments] ********* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Set wildfly group] ******* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Ensure group wildfly exists.] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Ensure user wildfly exists.] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Ensure workdir /opt/ exists.] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Ensure archive_dir /opt/ exists.] 
*** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Ensure server is installed] *** included: /root/.ansible/collections/ansible_collections/middleware_automation/wildfly/roles/wildfly_install/tasks/install.yml for localhost TASK [middleware_automation.wildfly.wildfly_install : Check arguments] ********* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Check local download archive path] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Set download paths] ****** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Check target archive: /opt//wildfly-30.0.1.Final.zip] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Retrieve archive from website: https://github.com/wildfly/wildfly/releases/download] *** included: /root/.ansible/collections/ansible_collections/middleware_automation/wildfly/roles/wildfly_install/tasks/install/web.yml for localhost TASK [middleware_automation.wildfly.wildfly_install : Check arguments] ********* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Download zipfile from https://github.com/wildfly/wildfly/releases/download/30.0.1.Final/wildfly-30.0.1.Final.zip into /work/wildfly-30.0.1.Final.zip] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Retrieve archive from RHN] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Install server using RPM] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Check downloaded archive] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Copy archive to target nodes] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Check target archive: /opt//wildfly-30.0.1.Final.zip] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Verify target archive state: /opt//wildfly-30.0.1.Final.zip] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Read target directory information: /opt/wildfly-30.0.1.Final] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Extract files from /opt//wildfly-30.0.1.Final.zip into /opt/.] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Note: decompression was not executed] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Read information on server home directory: /opt/wildfly-30.0.1.Final] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Check state of server home directory: /opt/wildfly-30.0.1.Final] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Set instance name] ******* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Deploy custom configuration] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Deploy configuration] **** changed: [localhost] TASK [Apply latest cumulative patch] ******************************************* skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Ensure required parameters for elytron adapter are provided.] 
*** skipping: [localhost] TASK [Install elytron adapter] ************************************************* skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Install server using Prospero] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Check wildfly install directory state] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Validate conditions] ***** ok: [localhost] TASK [Ensure firewalld configuration allows server port (if enabled).] ********* skipping: [localhost] TASK [Set up for WildFly instance {{ item }.] ********************************* TASK [middleware_automation.wildfly.wildfly_systemd : Validating arguments against arg spec 'main'] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments] ********* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check current EAP patch installed] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments for yaml configuration] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check if YAML configuration extension is supported in WildFly] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check if YAML configuration extension is supported in EAP] *** skipping: [localhost] TASK [Ensure required local user and group exists.] **************************** TASK [middleware_automation.wildfly.wildfly_install : Check arguments] ********* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Set wildfly group] ******* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Ensure group wildfly exists.] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Ensure user wildfly exists.] 
*** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set destination directory for configuration] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set instance destination directory for configuration] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments] ********* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set base directory for instance] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments] ********* ok: [localhost] => { "changed": false, "msg": "All assertions passed" } TASK [middleware_automation.wildfly.wildfly_systemd : Set instance name] ******* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set instance name] ******* skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set bind address] ******** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Create basedir /opt/wildfly-00 for instance: wildfly-0] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Create deployment directories for instance: wildfly-0] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Deploy custom configuration] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Deploy configuration] **** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Include YAML configuration extension] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check YAML configuration is disabled] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set systemd envfile destination] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Determine JAVA_HOME for selected JVM] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Determine JAVA_HOME for selected JVM] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set systemd unit file destination] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Deploy service instance configuration: /etc/wildfly-0.conf] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Deploy Systemd configuration for service: /usr/lib/systemd/system/wildfly-0.service] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Perform daemon-reload to ensure the changes are picked up] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Ensure service is started] *** included: /root/.ansible/collections/ansible_collections/middleware_automation/wildfly/roles/wildfly_systemd/tasks/service.yml for localhost TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments] ********* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set instance wildfly-0 state to started] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Validating arguments against arg spec 'main'] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments] ********* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check current EAP patch installed] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments for yaml configuration] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check if YAML configuration extension is supported in WildFly] *** 
skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check if YAML configuration extension is supported in EAP] *** skipping: [localhost] TASK [Ensure required local user and group exists.] **************************** TASK [middleware_automation.wildfly.wildfly_install : Check arguments] ********* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Set wildfly group] ******* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Ensure group wildfly exists.] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Ensure user wildfly exists.] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set destination directory for configuration] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set instance destination directory for configuration] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments] ********* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set base directory for instance] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments] ********* ok: [localhost] => { "changed": false, "msg": "All assertions passed" } TASK [middleware_automation.wildfly.wildfly_systemd : Set instance name] ******* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set instance name] ******* skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set bind address] ******** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Create basedir /opt/wildfly-11 for instance: wildfly-1] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Create deployment directories for instance: wildfly-1] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Deploy custom configuration] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Deploy configuration] **** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Include YAML configuration extension] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check YAML configuration is disabled] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set systemd envfile destination] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Determine JAVA_HOME for selected JVM] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Determine JAVA_HOME for selected JVM] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set systemd unit file destination] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Deploy service instance configuration: /etc/wildfly-1.conf] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Deploy Systemd configuration for service: /usr/lib/systemd/system/wildfly-1.service] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Perform daemon-reload to ensure the changes are picked up] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Ensure service is started] *** included: /root/.ansible/collections/ansible_collections/middleware_automation/wildfly/roles/wildfly_systemd/tasks/service.yml for localhost TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments] ********* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set instance wildfly-1 
state to started] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Validating arguments against arg spec 'main'] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments] ********* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check current EAP patch installed] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments for yaml configuration] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check if YAML configuration extension is supported in WildFly] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check if YAML configuration extension is supported in EAP] *** skipping: [localhost] TASK [Ensure required local user and group exists.] **************************** TASK [middleware_automation.wildfly.wildfly_install : Check arguments] ********* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Set wildfly group] ******* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Ensure group wildfly exists.] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_install : Ensure user wildfly exists.] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set destination directory for configuration] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set instance destination directory for configuration] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments] ********* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set base directory for instance] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments] ********* ok: [localhost] => { "changed": false, "msg": "All assertions passed" } TASK [middleware_automation.wildfly.wildfly_systemd : Set instance name] ******* ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set instance name] ******* skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set bind address] ******** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Create basedir /opt/wildfly-22 for instance: wildfly-2] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Create deployment directories for instance: wildfly-2] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Deploy custom configuration] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Deploy configuration] **** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Include YAML configuration extension] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Check YAML configuration is disabled] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set systemd envfile destination] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Determine JAVA_HOME for selected JVM] *** ok: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Determine JAVA_HOME for selected JVM] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Set systemd unit file destination] *** skipping: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Deploy service instance configuration: /etc/wildfly-2.conf] *** changed: [localhost] TASK [middleware_automation.wildfly.wildfly_systemd : Deploy Systemd 
configuration for service: /usr/lib/systemd/system/wildfly-2.service] ***
changed: [localhost]
TASK [middleware_automation.wildfly.wildfly_systemd : Perform daemon-reload to ensure the changes are picked up] ***
ok: [localhost]
TASK [middleware_automation.wildfly.wildfly_systemd : Ensure service is started] ***
included: /root/.ansible/collections/ansible_collections/middleware_automation/wildfly/roles/wildfly_systemd/tasks/service.yml for localhost
TASK [middleware_automation.wildfly.wildfly_systemd : Check arguments] *********
ok: [localhost]
TASK [middleware_automation.wildfly.wildfly_systemd : Set instance wildfly-2 state to started] ***
changed: [localhost]
TASK [Wait for each instance HTTP ports to become available.] ******************
ok: [localhost] => (item=8080)
ok: [localhost] => (item=8180)
ok: [localhost] => (item=8280)
TASK [Checks that WildFly server is running and accessible.] *******************
changed: [localhost] => (item=8080)
changed: [localhost] => (item=8180)
changed: [localhost] => (item=8280)
PLAY RECAP *********************************************************************
localhost : ok=105 changed=26 unreachable=0 failed=0 skipped=46 rescued=0 ignored=0

Note that the playbook is not that long, but it does a lot for us. It performs almost 100 different tasks, starting with automatically installing the dependencies, including the JVM required by WildFly, and downloading the WildFly binaries. The wildfly_systemd role does even more, effortlessly setting up three distinct services, each with its own set of ports and its own directory layout for instance-specific data.

Even better, the WildFly installation is NOT duplicated. All of the binaries live under the /opt/wildfly-30.0.1.Final directory, while the data files of each instance are stored in separate folders. This means we only need to update the binaries once and then restart the instances to deploy a patch or upgrade to a new version of WildFly. On top of everything, we configured the instances to use the standalone-ha.xml configuration as the baseline, so they are already set up for clustering.
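Because the binaries are shared and only the per-instance directories differ, a later upgrade mostly comes down to replacing the binaries and bouncing the services. Assuming the wildfly-0, wildfly-1, and wildfly-2 service names created above, a small ad hoc play like the following could restart all three instances (a sketch of our own, not part of the collection):

YAML
- name: "Restart all WildFly instances after a binary upgrade"
  hosts: "{{ hosts_group_name | default('localhost') }}"
  become: yes
  tasks:
    - name: "Restart each wildfly-N systemd service"
      ansible.builtin.systemd:
        name: "wildfly-{{ item }}"
        state: restarted
      loop: "{{ range(0, 3) | list }}"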
Check That Everything Works as Expected The easiest way to confirm that the playbook did indeed install WildFly and start three instances of the appserver is to use the systemctl command to check the associate services state: Shell # systemctl status wildfly-0 ● wildfly-0.service - JBoss EAP (standalone mode) Loaded: loaded (/usr/lib/systemd/system/wildfly-0.service; enabled; vendor preset: disabled) Active: active (running) since Thu 2024-01-18 07:01:44 UTC; 5min ago Main PID: 884 (standalone.sh) Tasks: 89 (limit: 1638) Memory: 456.3M CGroup: /system.slice/wildfly-0.service ├─ 884 /bin/sh /opt/wildfly-30.0.1.Final/bin/standalone.sh -c wildfly-0.xml -b 0.0.0.0 -bmanagement 127.0.0.1 -Djboss.bind.address.private=127.0.0.1 -Djboss.default.multicast.address=230.0.0.4 -Djboss.server.config.dir=/opt/wildfly-30.0.1.Final/standalone/configuration/ -Djboss.server.base.dir=/opt/wildfly-00 -Djboss.tx.node.id=wildfly-0 -Djboss.socket.binding.port-offset=0 -Djboss.node.name=wildfly-0 -Dwildfly.statistics-enabled=false └─1044 /etc/alternatives/jre_11/bin/java -D[Standalone] -Djdk.serialFilter=maxbytes=10485760;maxdepth=128;maxarray=100000;maxrefs=300000 -Xmx1024M -Xms512M --add-exports=java.desktop/sun.awt=ALL-UNNAMED --add-exports=java.naming/com.sun.jndi.ldap=ALL-UNNAMED --add-exports=java.naming/com.sun.jndi.url.ldap=ALL-UNNAMED --add-exports=java.naming/com.sun.jndi.url.ldaps=ALL-UNNAMED --add-exports=jdk.naming.dns/com.sun.jndi.dns=ALL-UNNAMED --add-opens=java.base/com.sun.net.ssl.internal.ssl=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.security=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.management/javax.management=ALL-UNNAMED --add-opens=java.naming/javax.naming=ALL-UNNAMED -Dorg.jboss.boot.log.file=/opt/wildfly-00/log/server.log -Dlogging.configuration=file:/opt/wildfly-30.0.1.Final/standalone/configuration/logging.properties -jar /opt/wildfly-30.0.1.Final/jboss-modules.jar -mp /opt/wildfly-30.0.1.Final/modules org.jboss.as.standalone -Djboss.home.dir=/opt/wildfly-30.0.1.Final -Djboss.server.base.dir=/opt/wildfly-00 -c wildfly-0.xml -b 0.0.0.0 -bmanagement 127.0.0.1 -Djboss.bind.address.private=127.0.0.1 -Djboss.default.multicast.address=230.0.0.4 -Djboss.server.config.dir=/opt/wildfly-30.0.1.Final/standalone/configuration/ -Djboss.server.base.dir=/opt/wildfly-00 -Djboss.tx.node.id=wildfly-0 -Djboss.socket.binding.port-offset=0 -Djboss.node.name=wildfly-0 -Dwildfly.statistics-enabled=false Jan 18 07:01:47 7c4a5dd056d1 standalone.sh[1044]: 07:01:47,090 INFO [org.jboss.modcluster] (ServerService Thread Pool -- 84) MODCLUSTER000032: Listening to proxy advertisements on /224.0.1.105:23364 Jan 18 07:01:47 7c4a5dd056d1 standalone.sh[1044]: 07:01:47,148 INFO [org.wildfly.extension.undertow] (MSC service thread 1-4) WFLYUT0006: Undertow HTTPS listener https listening on [0:0:0:0:0:0:0:0]:8443 Jan 18 07:01:47 7c4a5dd056d1 standalone.sh[1044]: 07:01:47,149 INFO [org.jboss.as.ejb3] (MSC service thread 1-3) WFLYEJB0493: Jakarta Enterprise Beans subsystem suspension complete Jan 18 07:01:47 7c4a5dd056d1 standalone.sh[1044]: 07:01:47,183 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-2) WFLYJCA0001: Bound data source [java:jboss/datasources/ExampleDS] Jan 18 07:01:47 
7c4a5dd056d1 standalone.sh[1044]: 07:01:47,246 INFO [org.jboss.as.server.deployment.scanner] (MSC service thread 1-2) WFLYDS0013: Started FileSystemDeploymentService for directory /opt/wildfly-00/deployments
Jan 18 07:01:47 7c4a5dd056d1 standalone.sh[1044]: 07:01:47,285 INFO [org.jboss.ws.common.management] (MSC service thread 1-5) JBWS022052: Starting JBossWS 7.0.0.Final (Apache CXF 4.0.0)
Jan 18 07:01:47 7c4a5dd056d1 standalone.sh[1044]: 07:01:47,383 INFO [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0212: Resuming server
Jan 18 07:01:47 7c4a5dd056d1 standalone.sh[1044]: 07:01:47,388 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http management interface listening on http://127.0.0.1:9990/management
Jan 18 07:01:47 7c4a5dd056d1 standalone.sh[1044]: 07:01:47,388 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console listening on http://127.0.0.1:9990
Jan 18 07:01:47 7c4a5dd056d1 standalone.sh[1044]: 07:01:47,390 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: WildFly Full 30.0.1.Final (WildFly Core 22.0.2.Final) started in 2699ms - Started 311 of 708 services (497 services are lazy, passive or on-demand) - Server configuration file in use: wildfly-0.xml

Deploy an Application to the WildFly Cluster

Our three WildFly instances are now running, but the cluster has yet to form. Indeed, with no applications deployed, there is no reason for the cluster to exist. Let’s modify our Ansible playbook to deploy a simple application to all instances; this will allow us to check that the cluster is working as expected.

To achieve this, we’ll leverage another role provided by the WildFly collection: wildfly_utils. In our case, we will use the jboss_cli.yml task file, which encapsulates running JBoss command-line interface (CLI) queries:

YAML
…
  post_tasks:
    - name: "Ensures webapp {{ app.name }} has been retrieved from {{ app.url }}."
      ansible.builtin.get_url:
        url: "{{ app.url }}"
        dest: "{{ wildfly_install_workdir }}/{{ app.name }}"
    - name: "Deploy webapp"
      ansible.builtin.include_role:
        name: wildfly_utils
        tasks_from: jboss_cli.yml
      vars:
        jboss_home: "{{ wildfly_home }}"
        query: "'deploy --force {{ wildfly_install_workdir }}/{{ app.name }}'"
        jboss_cli_controller_port: "{{ item }}"
      loop:
        - 9990
        - 10090
        - 10190

Now we execute our playbook once again so that the web application is deployed on all instances. Once the automation completes successfully, the deployment triggers the formation of the cluster.

Verify That the WildFly Cluster Is Running and the App Is Deployed

You can verify the cluster formation by looking at the log file of any of the three instances:

Shell
…
2022-12-23 15:02:08,252 INFO [org.infinispan.CLUSTER] (thread-7,ejb,jboss-eap-0) ISPN000094: Received new cluster view for channel ejb: [jboss-eap-0] (3) [jboss-eap-0, jboss-eap-1, jboss-eap-2]
…

Using the Ansible Collection as an Installer for WildFly

One last remark: while the collection is designed to be used inside a playbook, you can also use the playbook it provides to install WildFly directly:

Shell
$ ansible-playbook -i inventory middleware_automation.wildfly.playbook

Conclusion

Here you go: with a short and simple playbook, we have fully automated the deployment of a WildFly cluster! This playbook can now be used against one, two, or three remote machines, or even hundreds of them! I hope this post has been informative and that it has convinced you to use Ansible to set up your own WildFly servers!
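Since the play targets "{{ hosts_group_name | default('localhost') }}", pointing the same playbook at real remote machines mostly means supplying a different inventory and overriding that variable, for example with ansible-playbook -i inventory.yml -e hosts_group_name=wildfly_hosts playbook.yml. The host names below are placeholders for your own machines:

YAML
# inventory.yml - hypothetical inventory for three remote application servers
all:
  children:
    wildfly_hosts:
      hosts:
        appserver1.example.com:
        appserver2.example.com:
        appserver3.example.com: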
Cloud technology changes the way we reason about troubleshooting, maintaining, and deploying systems. We used to view hardware and software as separate entities, managed by different teams with different priorities and disparate methodologies. That’s no longer the case; teams now work together to improve resiliency and maximize agility. There’s no better example of this than Infrastructure as Code (IaC). Let’s take a deep dive into IaC and examine how it makes your systems more reliable, manageable, and reproducible.

What Is Infrastructure as Code (IaC)?

Infrastructure as Code (IaC) is the practice of maintaining and configuring infrastructure with the same methods as software. Instead of configuring computers and network hardware manually, you use automated tools and version control (VCS) to manage their configuration as code. For many teams, the final step is adding continuous integration and continuous delivery/deployment (CI/CD) pipelines. When you deploy your systems this way, you’ve adopted the entire IaC stack.

Benefits of Using IaC

Infrastructure as Code uses machine-readable definitions to automate the provisioning and management of computing infrastructure, and it has several important benefits:

- IaC makes your infrastructure consistent and reproducible. When you define your systems in code and manage that code in VCS, it’s easy to recreate and deploy them. You can build the same infrastructure many times, reducing the risk of human error, and you can easily fall back to an old version of your infrastructure when something goes wrong.
- When you run your IaC tools in your CI/CD pipelines, your infrastructure becomes scalable and agile. You can bundle new releases with their infrastructure changes and scale systems up and down with your pipelines.
- Your teams can share, review, and track IaC using the same tools you use for the rest of your software, like Git. This makes it easier for your teams to collaborate on software and infrastructure projects.

Declarative vs. Imperative Programming

IaC supports both declarative and imperative coding styles. Understanding these approaches helps you pick the right tool for your infrastructure.

Declarative

As the name implies, declarative coding declares the desired state of an object. In declarative IaC, you define the state you want your systems to be in, and the tool handles the details of getting there. Two popular declarative IaC tools are Terraform and Puppet.

Imperative

Imperative programming uses step-by-step instructions to complete a task; it’s the coding style used by languages like Python and Go. In imperative IaC, you define the actions required to bring a system to its desired state. Chef is an example of an imperative IaC tool, while Ansible combines imperative and declarative approaches in its Domain Specific Language (DSL).

When you create your infrastructure, you can also choose between immutable and mutable approaches.
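To make the distinction concrete, here is a small, hypothetical Ansible snippet contrasting the two styles: the first task declares the end state (nginx installed) and lets the module work out the steps, while the second spells out the commands imperatively. The package and commands are illustrative only:

YAML
tasks:
  # Declarative: describe the desired state; the module decides what to do
  - name: Ensure nginx is installed
    ansible.builtin.package:
      name: nginx
      state: present

  # Imperative: spell out the exact commands to run, step by step
  - name: Install and start nginx with explicit commands
    ansible.builtin.shell: |
      dnf install -y nginx
      systemctl enable --now nginx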
Mutable vs. Immutable Infrastructure

Immutable IaC

If something is immutable, you can’t change it. If you need to update a setting or add something, you must create a new copy and replace the outdated one. Docker containers are immutable: if you want to preserve a container’s state across sessions, you need to externalize it, often by connecting the container to a persistent filesystem. DZone has previously covered how engineering teams can approach container security. Kubernetes, since it is built on immutable containers, treats its applications as immutable too; deployments entail creating new sets of containers. You can also learn how to set up a CI/CD pipeline with Kubernetes.

Terraform treats most infrastructure as immutable: when you apply a configuration change, it creates a new instance and destroys the outdated one. Related tutorial: how to build Docker images using Jenkins pipelines.

Advantages

At first glance, this approach seems slow and wasteful. But, like immutable programming, immutable infrastructure has several important advantages:

- Consistency: Immutable infrastructure, by definition, stays the way you created it, and you can easily restore or rebuild it with your IaC tools.
- Auditability: With IaC and immutable infrastructure, your source files are an accurate audit trail for the state of your systems.
- Fewer errors: When you combine immutability with IaC, you only change infrastructure via code, so policies like pull requests and audit trails reduce errors. Properly implemented IaC reduces mistakes.

Disadvantages

Like any other methodology, there are disadvantages to weigh against the benefits:

- Deployment time: The time needed to deploy systems increases with their number and complexity.
- No small fixes: Deployments for immutable infrastructure are all or nothing; even a small fix requires a complete deployment.
- Higher resource utilization: Most deployments involve standing up a new instance, followed by a cutover from the old to the new. Depending on the system, this can require significant resources.

Mutable IaC

Mutable is the exact opposite of immutable: if something is mutable, you can update it after you create it. For instance, if you need to change the amount of memory in a cloud instance, you can apply the change to the existing system. Cloud virtual machines like Amazon Elastic Compute Cloud instances are mutable by default; unlike containers, you can reconfigure them without creating new ones. Updating operating systems via package managers like apt and dnf is another example of mutable infrastructure. Ansible and Chef are often used as mutable IaC tools, using their playbooks and cookbooks to update system configurations based on programmatic instructions.

Advantages

Mutable infrastructure has several important advantages over an immutable approach:

- You can update your infrastructure quickly. For example, you can apply the latest security patches to your systems as a discrete operation.
- Mutable deployments are not all or nothing. You can tailor the scope and timing of updates to individual systems and applications, which lowers the risk of each deployment and simplifies scheduling.
- Deployment times are not tied to the size and complexity of your systems.

Disadvantages

The flexibility offered by mutable infrastructure does come at a cost, though:

- Unlike with immutable tools, your mutable IaC code represents the changes it applied rather than the complete system state, which makes audits more difficult.
- Mutable systems are prone to configuration drift.
- Mutable deployments may be more complex than immutable ones and pose a greater risk of failure.

The choice between immutable and mutable IaC depends on your specific requirements and the nature of your infrastructure. Many organizations choose an integrated approach, making some systems mutable and others immutable.
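As a concrete illustration of a mutable change, the hypothetical Ansible play below patches existing servers in place via the package manager instead of replacing them (the host group name is an assumption):

YAML
- name: Apply the latest OS updates to existing servers in place
  hosts: web_servers
  become: yes
  tasks:
    - name: Upgrade all installed packages with dnf
      ansible.builtin.dnf:
        name: "*"
        state: latest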
They're excellent choices for automating your infrastructure provisioning and management. Terraform is a tool for describing cloud infrastructure in declarative code. It supports multiple cloud platforms and services. So, you can use it to provision and manage resources across different providers in a consistent and reproducible manner. In Terraform, infrastructure objects are immutable. OpenTofu is a fork of Terraform, created in response to HashiCorp’s decision to switch Terraform to the Business Source License (BUSL). It’s compatible with Terraform but, as a fork, will head in a different direction with a different feature set. Ansible is an automation tool with a declarative language for defining system configurations, though you can use procedural code for many operations, too. It treats infrastructure as mutable, managing it for its entire lifecycle. It’s useful for both configuration management and infrastructure provisioning. One of Ansible’s biggest advantages is that it works over Secure Shell (SSH) instead of requiring an agent. Puppet is an agent-based tool for automating infrastructure configuration and management. It uses a declarative language. One advantage of Puppet’s agent is that you can “push” changes to a system on demand, or the agent can “pull” updates on a schedule. Chef is another agent-based IaC tool. It operates similarly to Puppet but employs imperative “cookbooks” to describe infrastructure. They contain the steps required to build and configure your infrastructure. Its Ruby-based DSL is more complicated than Puppet’s, but some users prefer it for configuration management. SaltStack is like Puppet and Chef in that it works with an agent-based model. However, it uses an event-driven model for quickly distributing changes to managed systems. SaltStack uses a declarative DSL to describe system states. Conclusion IaC leverages code and coding practices for defining and managing infrastructure resources. It makes it easy to add scalability, consistency, and efficiency to your system deployments. This means your team can deploy infrastructure configurations faster and with fewer errors. Terraform, Ansible, Puppet, Chef, and SaltStack are just a few examples of tools that can help streamline your infrastructure provisioning and management. Embrace the power of Infrastructure as Code and revolutionize the way you deploy and manage your systems.
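To make the declarative/imperative distinction concrete, here is a minimal sketch under stated assumptions: the package (nginx), the host group (web), and the file names are placeholders chosen for illustration, not part of the article. The imperative version spells out each step as shell commands; the declarative Ansible play only states the desired end state and lets the tool work out the steps; the Terraform commands show the typical plan-and-apply loop for declarative configuration files (not shown here).

Imperative: list the steps yourself.
sudo apt update
sudo apt install -y nginx
sudo systemctl enable --now nginx

Declarative: describe the result and let Ansible reconcile it.
# playbook.yml (hypothetical host group "web")
- hosts: web
  become: true
  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.apt:
        name: nginx
        state: present
    - name: Ensure nginx is running and enabled
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true

ansible-playbook -i inventory.ini playbook.yml

Declarative with Terraform: review the planned changes, then apply them.
terraform init
terraform plan
terraform apply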
GitHub is a popular development platform that allows developers to collaborate on projects, manage code repositories, and track changes made to files. Widely used in the software development industry, it offers a range of solutions and features to help enhance and streamline the development process. In this guide, we will outline a few key GitHub features and how to utilize them effectively. 1. Repository Management Create a repository: Start a new project by creating a repository on GitHub. Use the web interface or Git commands to initialize the repository. Clone a repository: Clone an existing repository to your local machine to start working on it. Branching: Create branches to develop new features or fix bugs without affecting the main codebase. Use the `git branch` and `git checkout` commands to create and switch between branches. Pull requests: Propose changes to the main codebase by submitting a pull request. Collaborators can review the changes, leave comments, and merge them into the main branch if they are approved. 2. Version Control Commit changes: Use the `git add` command to stage changed files and the `git commit` command to commit them to the local repository with a descriptive message. Revert changes: If you want to discard changes or revert to a previous state, use the `git revert` or `git reset` commands. Merge branches: Merge changes from one branch into another using the `git merge` command or pull request on GitHub. 3. Collaboration Fork a repository: Make a personal copy of a project to freely experiment with changes without affecting the original repository. You can contribute back by submitting a pull request. Collaborators: Add team members as collaborators to your repository, allowing them to contribute and manage the project. Issues: Use the issue tracker to report bugs, suggest enhancements, or discuss ideas within a repository. Assign labels, milestones, and assignees to better organize and track issues. Wiki and documentation: Write project documentation and maintain a wiki for better collaboration and dissemination of knowledge. 4. Continuous Integration and Deployment GitHub Actions: Automate your development workflows using GitHub Actions. Define custom workflows in YAML to build, test, and deploy your code. Integration with other tools: GitHub provides seamless integration with tools like Jira, Slack, and Jenkins, allowing you to streamline your development process. Deployments: Use GitHub's deployment features to deploy code to servers or cloud platforms. This can be done manually or automated using CI/CD pipelines. 5. Code Review Code review requests: Request peers or collaborators to review your code changes in a pull request. View and address their comments for better overall code quality. Code review guidelines: Establish guidelines for code reviews to standardize the process and ensure that code is thoroughly reviewed and tested. 6. Security and Monitoring Security scanning: Enable security features on your repositories to identify vulnerabilities and security issues in your code. GitHub provides automated security scanning tools. Dependabot: GitHub's Dependabot automatically checks for vulnerabilities in your project's dependencies and creates pull requests to update them. 7. Community Engagement GitHub Pages: Host your project's documentation or website for free using GitHub Pages. Utilize custom domain names and various templates to showcase your work. 
Discussions: Create and participate in discussions related to projects, ideas, or community topics within the GitHub ecosystem. Open-source contributions: Contribute to open-source projects hosted on GitHub. Fork the repository, make changes, and submit a pull request to get involved in the community. Conclusion Using GitHub features and solutions effectively can significantly improve collaboration, code quality, and project management in the software development process. By mastering the tools available, developers can make the most out of their GitHub experience and streamline their workflow.
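To tie the repository, branching, and pull request features above together, here is a typical command sequence; the repository URL, branch name, and file names are placeholders, and the final step assumes the optional GitHub CLI is installed (you can also open the pull request in the web interface).

git clone https://github.com/your-org/your-repo.git
cd your-repo
git checkout -b feature/login-validation    # create and switch to a feature branch
git add login.py                            # stage the changed file
git commit -m "Add login form validation"   # record the change locally
git push -u origin feature/login-validation
gh pr create --title "Add login form validation" --body "Adds client-side checks"   # or open the PR on github.com

Once the pull request is approved, merging it on GitHub brings the branch's commits into the main branch, matching the merge workflow described above.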
The deployment of modern applications now relies heavily on containerization in the fast-paced world of software development. Thanks to Docker, a leading containerization platform, applications can be packaged and distributed more easily in portable, isolated environments. This comprehensive guide will walk you through the crucial steps of installing Docker, running containers, managing storage, and setting up networking. Let us establish a shared understanding of a few basic concepts before we delve into the finer points of Docker. What Is Docker? Applications and their dependencies can be packaged into small, portable containers using the Docker platform. Containers are closed environments that contain all the components required to run an application, including libraries, runtime, code, and system tools. This method ensures consistency between the development, testing, and production environments. Why Use Docker? Portability: Docker containers can run on any platform that supports Docker, ensuring consistent behavior across different environments. Isolation: Containers provide strong isolation, preventing conflicts between applications and their dependencies. Efficiency: Containers share the host OS kernel, reducing overhead and enabling rapid startup and scaling. DevOps-friendly: Docker simplifies the deployment pipeline, making it easier to build, test, and deploy applications. Now that we understand why Docker is essential, let’s proceed with the installation process. Installing Docker Linux Installation Installing Docker on a Linux-based system is straightforward, but the exact steps may vary depending on your distribution. Here’s a general guide:
Update Package Repository:
sudo apt update
Install Dependencies:
sudo apt install -y apt-transport-https ca-certificates curl software-properties-common
Add Docker Repository:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Install Docker:
sudo apt update
sudo apt install docker-ce
Start and Enable Docker:
sudo systemctl start docker
sudo systemctl enable docker
macOS Installation Docker Desktop for macOS offers a convenient way to run Docker on your Mac: Download Docker Desktop: Visit the Docker website and download Docker Desktop for macOS. Install Docker Desktop: Run the installation package and follow the on-screen instructions. Windows Installation Similar to macOS, Docker Desktop for Windows simplifies Docker installation: Download Docker Desktop: Visit the Docker website and download Docker Desktop for Windows. Install Docker Desktop: Run the installation package and follow the on-screen instructions. Running Your First Container With Docker successfully installed, let’s run your first container: Open a Terminal (Linux/macOS) or Command Prompt (Windows). Pull and Run the “Hello World” Container:
docker run hello-world
Docker will automatically pull the “hello-world” image from Docker Hub and create a container. You’ll see a message confirming that your installation appears to be working correctly. You’ve just run your first container!
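The hello-world image came from Docker Hub, but you can also package your own code. As a minimal, hypothetical sketch (the script name app.py and the image name my_image are assumptions; my_image is reused in the storage and networking examples below), a Dockerfile placed next to your code might look like this:

# Dockerfile: a hypothetical minimal image for a small Python script
FROM python:3.12-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]

docker build -t my_image .             # build the image from the Dockerfile in the current directory
docker run -d --name my_app my_image   # start a container from the freshly built image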
Now, let’s explore how Docker handles storage. Working With Storage in Docker Docker offers several options for managing storage, allowing you to persist data between container runs and share data between containers. Creating a Persistent Volume Docker volumes are the recommended way to persist data:
Create a Volume:
docker volume create my_volume
Run a Container with the Volume:
docker run -d -v my_volume:/data my_image
Data stored in the /data directory inside the container will be saved in the my_volume Docker volume. This ensures that data remains intact even if the container is removed or recreated. Mounting Host Directories Alternatively, you can mount directories from your host machine into a container:
Run a Container with a Host Directory Mount:
docker run -d -v /path/on/host:/path/in/container my_image
Replace /path/on/host with the path to the directory on your host machine and /path/in/container with the desired path inside the container. Changes made in the container directory will be reflected in the host directory. Now that you understand how Docker handles storage, let’s delve into networking. Networking With Docker Docker provides various networking options to facilitate communication between containers and external networks. We’ll start with the basics. Bridge Networking By default, Docker attaches containers to a built-in bridge network, a private internal network for containers on the same host:
Run Two Containers with Bridge Networking:
docker run -d --name container1 my_image
docker run -d --name container2 my_image
Containers on the default bridge network can reach each other by IP address. Automatic name resolution (for example, container1 reaching container2 via http://container2) is only available on user-defined networks, which are covered next. Creating Custom Networks Custom networks allow you to isolate containers, manage their communication more effectively, and resolve containers by name:
Create a Custom Network:
docker network create my_network
Run Containers in the Custom Network:
docker run -d --name container1 --network my_network my_image
docker run -d --name container2 --network my_network my_image
Containers in the my_network network can communicate with each other directly, using their container names as hostnames. Advanced Docker Networking While bridge and custom networks are suitable for many use cases, Docker provides advanced networking features for more complex scenarios: Overlay Networks: Facilitate communication between containers across multiple hosts. Macvlan Networks: Assign containers unique MAC addresses, making them appear as physical devices on the network. Host Networks: Use the host’s network stack directly, eliminating network isolation. The choice of network type depends on your specific requirements. For detailed information on these advanced networking features, refer to the Docker documentation. Conclusion Docker is a game-changing technology that makes deploying applications easier, encourages consistency between environments, and improves development and operations workflows. You have now gained knowledge of how to install Docker, run containers, control storage, and configure networking. If you want to become an expert in Docker, continue learning about its robust ecosystem, which includes Docker Compose for managing multi-container applications, Docker Swarm and Kubernetes for orchestrating containers, and Docker Hub for exchanging and distributing container images. Your development and deployment processes can be modernized with Docker, improving your applications' dependability, scalability, and portability. Good luck containerizing!
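As a follow-on to the pointer to Docker Compose above, here is a minimal, hypothetical compose file that combines the volume and custom network ideas from this guide; the service names, images, and port are placeholders.

# docker-compose.yml
services:
  web:
    image: my_image
    ports:
      - "8080:80"
    volumes:
      - my_volume:/data
    networks:
      - my_network
  db:
    image: postgres:16
    networks:
      - my_network
volumes:
  my_volume:
networks:
  my_network:

docker compose up -d   # start both services, the volume, and the network with one command

Compose services on the same user-defined network resolve each other by service name, just like the custom network example above.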
Artificial Intelligence (AI) and Machine Learning (ML) have revolutionized the way we approach problem-solving and data analysis. These technologies are powering a wide range of applications, from recommendation systems and autonomous vehicles to healthcare diagnostics and fraud detection. However, deploying and managing ML models in production environments can be a daunting task. This is where containerization comes into play, offering an efficient solution for packaging and deploying ML models. In this article, we'll explore the challenges of deploying ML models, the fundamentals of containerization, and the benefits of using containers for AI and ML applications. The Challenges of Deploying ML Models Deploying ML models in real-world scenarios presents several challenges. Traditionally, this process has been cumbersome and error-prone due to various factors: Dependency hell: ML models often rely on specific libraries, frameworks, and software versions. Managing these dependencies across different environments can lead to compatibility issues and version conflicts. Scalability: As the demand for AI/ML services grows, scalability becomes a concern. Ensuring that models can handle increased workloads and scale automatically as needed can be complex. Version control: Tracking and managing different versions of ML models is crucial for reproducibility and debugging. Without proper version control, it's challenging to roll back to a previous version or track the performance of different model iterations. Portability: ML models developed on one developer's machine may not run seamlessly on another's. Ensuring that models can be easily moved between development, testing, and production environments is essential. Containerization Fundamentals Containerization addresses these challenges by encapsulating an application and its dependencies into a single package, known as a container. Containers are lightweight and isolated, making them an ideal solution for deploying AI and ML models consistently across different environments. Key containerization concepts include: Docker: Docker is one of the most popular containerization platforms. It allows you to create, package, and distribute applications as containers. Docker containers can run on any system that supports Docker, ensuring consistency across development, testing, and production. Kubernetes: Kubernetes is an open-source container orchestration platform that simplifies the management and scaling of containers. It automates tasks like load balancing, rolling updates, and self-healing, making it an excellent choice for deploying containerized AI/ML workloads. Benefits of Containerizing ML Models Containerizing ML models offers several benefits: Isolation: Containers isolate applications and their dependencies from the underlying infrastructure. This isolation ensures that ML models run consistently, regardless of the host system. Consistency: Containers package everything needed to run an application, including libraries, dependencies, and configurations. This eliminates the "it works on my machine" problem, making deployments more reliable. Portability: Containers can be easily moved between different environments, such as development, testing, and production. This portability streamlines the deployment process and reduces deployment-related issues. Scalability: Container orchestration tools like Kubernetes enable auto-scaling of ML model deployments, ensuring that applications can handle increased workloads without manual intervention.
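As a hedged illustration of how Kubernetes runs a containerized model, here is a minimal Deployment manifest; the image name, port, replica count, and resource figures are assumptions made for this sketch, not values from the article. Kubernetes keeps the declared number of replicas running, restarts failed containers, and can scale the Deployment further with an autoscaler.

# model-deployment.yaml (hypothetical)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: churn-model
spec:
  replicas: 3                     # keep three identical model-serving pods running
  selector:
    matchLabels:
      app: churn-model
  template:
    metadata:
      labels:
        app: churn-model
    spec:
      containers:
        - name: churn-model
          image: registry.example.com/churn-model:1.4.2   # hypothetical model-serving image
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: "500m"
              memory: "1Gi"
            limits:
              cpu: "1"
              memory: "2Gi"

kubectl apply -f model-deployment.yaml   # hand the desired state to the cluster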
Best Practices for Containerizing AI/ML Models To make the most of containerization for AI and ML, consider these best practices: Version control: Use version control systems like Git to track changes to your ML model code. Include version information in your container images for easy reference. Dependency management: Clearly define and manage dependencies in your ML model's container image. Utilize virtual environments or container images with pre-installed libraries to ensure reproducibility. Monitoring and logging: Implement robust monitoring and logging solutions to gain insights into your containerized AI/ML applications' performance and behavior. Security: Follow security best practices when building and deploying containers. Keep container images up to date with security patches and restrict access to sensitive data and APIs. Case Studies Several organizations have successfully adopted containerization for AI/ML deployment. One notable example is Intuitive, which leverages containers and Kubernetes to manage its machine-learning infrastructure efficiently. By containerizing ML models, Intuitive can seamlessly scale its Annotations engine to millions of users while maintaining high availability. Another example is Netflix, which reported a significant reduction in deployment times and resource overheads after adopting containers for their recommendation engines. Conclusion While containerization offers numerous advantages, challenges such as optimizing resource utilization and minimizing container sprawl persist. Additionally, the integration of AI/ML with serverless computing and edge computing is an emerging trend worth exploring. In conclusion, containerization is a powerful tool for efficiently packaging and deploying ML models. It addresses the challenges associated with dependency management, scalability, version control, and portability. As AI and ML continue to shape the future of technology, containerization will play a pivotal role in ensuring reliable and consistent deployments of AI-powered applications. By embracing containerization, organizations can streamline their AI/ML workflows, reduce deployment complexities, and unlock the full potential of these transformative technologies in today's rapidly evolving digital landscape.
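One lightweight way to follow the version control best practice above is to tag every image build with the Git commit it was built from, so a running container can always be traced back to the exact source revision; the image name here is hypothetical.

docker build -t churn-model:$(git rev-parse --short HEAD) .
# produces a tag such as churn-model:3f2a9c1, tying the image to a specific commit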
Achieving agility, scalability, efficiency, and security is paramount in modern software development. While several cultural methodologies, tools, and approaches are used to achieve these goals, GitOps, Kubernetes, and Platform Engineering are keystones of this transformation. In this comprehensive guide, you will learn what GitOps, Kubernetes, and Platform Engineering are, unraveling their significance, working principles, and what makes this trio the powerhouse of modern DevOps. Revolutionizing Infrastructure Management With GitOps Understanding GitOps GitOps is a methodology that centers on the use of version control systems, with Git being the primary choice, as the single source of truth for both application code and infrastructure configurations. GitOps encourages the declaration of the desired state of applications and infrastructure within Git repositories. This approach makes it effortless to track changes, maintain version history, and foster seamless collaboration among team members. Furthermore, the use of pull requests and code reviews in GitOps ensures high code quality and security. Whenever changes are made to the Git repositories, automated processes ensure that the system's state remains aligned with the declared configuration. How GitOps Works To embark on a GitOps journey, you should follow a series of fundamental steps: Setting up a Git Repository: Begin by creating a Git repository to house your application code and configuration files. This repository serves as the nucleus of your GitOps workflow. Provisioning a Kubernetes Cluster: Establish a Kubernetes cluster to oversee your infrastructure. Kubernetes provides the orchestration necessary for efficient application deployment and scaling. Utilizing Declarative Manifests: Define Kubernetes manifests that describe the desired state of your infrastructure and applications. These declarative configurations serve as the blueprint for your environment. Automation in Action: Commit your changes to Git, and automated pipelines or agents apply them to your environment (a command-level sketch of this loop appears at the end of this article). Container Orchestration at Its Best With Kubernetes Understanding Kubernetes Kubernetes, often abbreviated as K8s, is a container orchestration platform that has revolutionized the deployment and management of containerized applications. Kubernetes excels in orchestrating containers, automating deployment, scaling applications, and managing their lifecycle. It provides a robust and scalable foundation for modern, cloud-native applications. Key features of Kubernetes include: Container Orchestration: Kubernetes efficiently manages the deployment, scaling, and operation of application containers. Service Discovery and Load Balancing: It offers built-in solutions for routing traffic to containers, ensuring high availability. Auto-Scaling: Kubernetes can dynamically adjust the number of running containers based on traffic and resource requirements. Declarative Configuration: Like GitOps, Kubernetes operates on a declarative model where you specify the desired state, and Kubernetes aligns the actual state with it. Resiliency and Disaster Recovery: Kubernetes also plays a crucial role in ensuring the resiliency of applications and supports disaster recovery strategies. How Kubernetes Works To harness the capabilities of Kubernetes, follow these steps: Setting up a Kubernetes Cluster: Similar to GitOps, you require a Kubernetes cluster to oversee your infrastructure. Leveraging Declarative Manifests: Kubernetes operates on the concept of the desired state.
Define your desired infrastructure and application state through declarative manifests. Containerized Deployment: Employ containers to package and deploy your applications. Containers guarantee consistency and portability. Monitoring and Management: Utilize Kubernetes tools to monitor and manage your applications. Features like load balancing, auto-scaling, and self-healing are readily available. Platform Engineering: Paving the Way to Efficiency Platform engineering revolves around creating a platform model that enables swift and reliable application development. Emerging from the evolution of DevOps and the rise of cloud-native technologies, platform engineering aims to prevent teams from reinventing the wheel by addressing shared problems and creating a well-paved road. Its focus is on providing developers with the right tools and environment to facilitate their best work, thereby reducing friction and increasing efficiency. The Role of Platform Engineering Platform Engineering involves designing, building, and maintaining platforms that support the development and delivery of applications. These platforms typically include infrastructure, services, and tools that empower development teams to create and deploy software efficiently. Platform engineers play a pivotal role in ensuring that the underlying infrastructure and tools are optimized for development and deployment. Critical responsibilities of platform engineers include: Infrastructure Management: Platform engineers oversee the management of infrastructure, whether it's on-premises or in the cloud. This includes provisioning, scaling, and maintaining servers, databases, and networking components. Automation: They automate processes to eliminate manual tasks and enhance efficiency. This includes creating CI/CD pipelines, configuring monitoring, and setting up disaster recovery mechanisms. Security and Compliance: Platform engineers implement security measures and ensure that the platform complies with relevant regulations and industry standards. Collaboration and Continuous Learning: Platform engineers also collaborate closely with development teams and are committed to continuous learning to keep up with rapidly evolving technology. Why Platform Engineering? Platform engineering offers several benefits: Increased Developer Velocity: It significantly boosts developer velocity while enhancing the reliability of applications and infrastructure. Reduced Friction: Platform engineering reduces friction by providing developers with the necessary tools and environment. Increased Efficiency and Cost-Effectiveness: By addressing shared problems, platform engineering increases efficiency, streamlines development, and often leads to better resource utilization and cost savings. How to Implement Platform Engineering To make effective use of Platform Engineering, follow these steps: Set up a Platform Engineering Team: Similar to GitOps and Kubernetes, you require a dedicated team to manage your infrastructure and applications. Define Your Strategy: Create a clear strategy outlining the desired state of your infrastructure and applications. Provide the Right Tools: Equip developers with the tools and environment they need to excel in their work. Efficiency and Collaboration: Focus on reducing friction, enhancing efficiency, and promoting collaboration by addressing shared problems. Continuous Feedback and Iteration: Emphasize the importance of continuous feedback and agile iteration in your strategy for platform engineering. 
The Power of GitOps, Kubernetes, and Platform Engineering in the SDLC When GitOps, Kubernetes, and Platform Engineering come together, they form a powerful combination with several key advantages: Declarative Infrastructure: GitOps and Kubernetes both rely on a declarative approach. With GitOps, you declare the desired state in Git repositories, while Kubernetes uses YAML configuration files to specify the application's desired shape and infrastructure. This synergy ensures that your infrastructure and applications are well-defined and automatically managed. Automation and Self-Healing: Automation is at the core of both GitOps and Kubernetes. Platform engineers can design CI/CD pipelines that automatically deploy and update applications. Kubernetes ensures that the desired application state is always maintained. When combined, these technologies lead to greater automation and the ability to self-heal, reducing the need for manual intervention and minimizing downtime. Scalability and Portability: Kubernetes excels at scaling applications up or down based on traffic and resource requirements. This inherent scalability, combined with GitOps practices, allows for easy scaling of both infrastructure and applications. You can handle increased loads without significant disruptions or complex manual adjustments. Consistency and Version Control: GitOps enforces the use of version control for both code and infrastructure. This provides a transparent and traceable history of changes, making it easier to audit and roll back when necessary. With its declarative configuration model, Kubernetes ensures that the desired state is consistently applied across all environments. Collaboration and Visibility: GitOps promotes transparency and collaboration through Git repositories. Team members can work together, review changes, and track the evolution of infrastructure and application configurations. Kubernetes enables teams to work with containers consistently, fostering collaboration and streamlining deployment. Security and Compliance: Platform engineers can establish security and compliance measures within the Kubernetes environment. GitOps, with its audit trail and precise version control, supports these efforts. Together, they ensure that infrastructure and applications are deployed securely and comply with necessary regulations. Conclusion: Which Is the Best for Your Business? In conclusion, GitOps, Kubernetes, and Platform Engineering are the driving forces behind modern software development. These principles enhance developer velocity, streamline workflows, and foster efficiency and reliability. By embracing these concepts, developers, managers, and project owners can seamlessly navigate the intricacies of modern DevOps and cloud-native technologies, unlocking the full potential of their software development endeavors.
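As a minimal, hypothetical sketch of the GitOps loop described above: the desired state lives in a Git repository of Kubernetes manifests, a change is proposed through a pull request, and once it is merged, an automated process (a CI/CD pipeline or a GitOps agent) reconciles the cluster. The repository, branch, and file names are assumptions for illustration.

git clone https://github.com/your-org/platform-config.git
cd platform-config
git checkout -b scale-frontend
# declare the new desired state, e.g., edit apps/frontend/deployment.yaml
# and change spec.replicas from 3 to 5
git commit -am "Scale frontend to 5 replicas"
git push -u origin scale-frontend
# open a pull request for review; after the merge, automation applies the change,
# conceptually the equivalent of:
kubectl apply -f apps/frontend/deployment.yaml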
Kubernetes has significantly simplified the management and operation of containerized applications. However, as these applications grow in complexity, there is an increasing need for more sophisticated deployment management tools. This is where Helm becomes invaluable. As a Kubernetes package manager, Helm greatly streamlines and simplifies deployment processes. In this article, we will delve deeply into Helm and explore how it facilitates the easier management of Kubernetes deployments. The Challenges of Kubernetes Deployments Kubernetes is fantastic for automating the deployment and management of containerized apps. It's great for running microservices and other stateless applications. However, managing deployments becomes a big challenge as your Kubernetes system gets more extensive and complicated. Here are the issues: Configuration Confusion: Managing configurations for different apps and services can get messy. It's even more complicated when you have different environments like development, staging, and production. Keeping Track of Versions: It takes a lot of work to track your apps' different versions and configurations. This can lead to mistakes and confusion. Dealing With Dependencies: When your apps get complex, they depend on other things. Making sure these dependencies are set up correctly takes time. Doing It Again and Again: Repeating deployments on different clusters or environments is a big job and can lead to mistakes. Introducing Helm Helm is often called the "app store" for Kubernetes because it makes handling deployments easy. Here's how Helm works: Charts: In Helm, a package of pre-set Kubernetes resources is called a "chart." A chart is a set of files that explains a group of Kubernetes resources. These resources include services, deployments, config maps, and more. Templates: Helm uses templates to create Kubernetes resources within a chart. These templates let you change how your app works. You can customize your deployments for different environments. Repositories: Charts are stored in "repositories," like app stores for Helm charts. You can use public ones or make your own private store. Managing Helm Deployments: Helm tracks each installed chart as a "release." This means you can track which version of a chart you deployed and what settings you used. The Advantages of Helm Helm has some significant advantages when it comes to handling Kubernetes deployments: Reusing Charts: You can share and reuse charts within your organization. This stops you from doing the same work repeatedly and ensures your deployments are consistent. Keeping Track of Versions: Helm helps you follow different versions of your apps and their setups. This is important for keeping your deployments stable and the same every time. Customization: Helm charts are very flexible. You can use values and templates to adjust your setup for different environments. Handling Dependencies: Helm sorts out dependencies quickly. If your app relies on other things, Helm will ensure they're set up and work correctly. Going Back in Time: Helm makes returning to an older app version easy, reducing downtime and limiting the impact of problems. Robust Support Network: Helm has a significant and active community. This means you can find and use charts made by other organizations. This saves you time when deploying common apps. Helm in Action Let's look at how Helm helps with deploying a web app, step by step: 1. Creating a Chart: First, you make a Helm chart for your web app.
The chart has templates for the web server, the database, and other parts needed. 2. Changing the Setup: You use Helm's values and templates to change how your web app works. For example, you can specify how many replicas you want, the database connection details, and which environment to use (like development or production). 3. Installation: With just one command, you install your web app using the Helm chart. Helm sets up everything your app needs based on the chart and your changes. 4. Upgrades: Change the chart version or values when updating your app. Helm will update your app with little work. Challenges and Important Points Even though Helm is great, you need to remember some things: Safety in Deployments: Ensure Helm deployments are secure, especially in multi-user environments, by implementing proper access controls and security practices. Best Practices: Focus on mastering the creation of Helm charts with best practices, ensuring efficient, reliable, and maintainable deployments. Dependency Management: Manage dependencies in Helm charts with careful consideration, including thorough testing and validation to avoid conflicts and issues. Chart Updates: Keep Helm charts regularly updated to benefit from the latest security patches, performance improvements, and new features. How Atmosly Integrates Helm Atmosly's integration with Helm brings to the forefront a dynamic marketplace that makes deploying applications to Kubernetes smoother. This powerful feature provides a centralized hub for discovering and deploying a wide range of Helm charts. From popular open-source Helm charts to private applications templated with Helm, users can navigate and select the charts they need and deploy applications across various clusters without having to manage access and permissions themselves. Atmosly’s Marketplace Features The marketplace is thoughtfully designed to cater to both public and private chart repositories, enabling teams to maintain a catalog of their own custom charts while also leveraging the vast repository of community-driven Helm charts. This dual capability ensures users can quickly adapt to different project requirements without leaving the Atmosly platform. The user-friendly interface of the marketplace displays an array of Helm charts, categorized for easy access, whether they are maintained by Atmosly, managed by users, or provided by third-party entities like Bitnami. Teams can deploy tools and applications, such as Apache, Elasticsearch, or custom enterprise solutions, straight into their Kubernetes environment with a simple click. By seamlessly integrating public and private Helm charts into a unified deployment experience, Atmosly's marketplace facilitates a level of agility and control that is essential for modern DevOps teams. It represents a strategic move towards simplifying complex deployment tasks, reducing the potential for error, and accelerating the journey from development to production. Wrapping Up Helm is an excellent tool for handling Kubernetes deployments. It makes things easy, even for complex apps and setups. You can have better, more stable, and customizable Kubernetes deployments using Helm's features. As Kubernetes keeps growing, Helm remains an essential tool to simplify and improve the deployment process. If you haven't looked at Helm yet, it's time to see how it can help you with your Kubernetes management.
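To connect charts, repositories, releases, and rollbacks to concrete commands, here is a typical Helm session; the release name, replica values, and the use of the public Bitnami repository are assumptions chosen for illustration.

helm repo add bitnami https://charts.bitnami.com/bitnami   # register a public chart repository
helm repo update
helm install my-web bitnami/nginx --set replicaCount=2     # install a chart as a release named "my-web"
helm upgrade my-web bitnami/nginx --set replicaCount=3     # change the release's desired configuration
helm history my-web                                        # list the revisions Helm has recorded
helm rollback my-web 1                                     # return to the first revision
helm create my-chart                                       # scaffold a chart of your own to customize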
I recently created a small DSL that provided state-based object validation, which I required for implementing a new feature. Multiple engineers were impressed with its general usefulness and wanted it available for others to leverage via our core platform repository. As most engineers do (almost) daily, I created a pull request: 16 classes/435 lines of code, 14 files/644 lines of unit tests, and six supporting files. Overall, it appeared fairly straightforward – the DSL is already being used in production – though I expected small changes as part of making it shareable. Boy, was I mistaken! The pull request required 61 comments and 37 individual commits to address (appease) the two reviewers’ concerns, encompassing approximately ten person-hours of effort before final approval. By a long stretch, the most traumatizing PR I’ve ever participated in! What was achieved? Not much, in all honesty, as the requested changes were fairly niggling: variable names, namespace, exception choices, lambda usage, an unused parameter. Did the changes result in cleaner code? Perhaps slightly; they did remove comment typos. Did the changes make the code easier to understand? No; I believe it is already fairly easy to understand. Were errors, potential errors, race conditions, or performance concerns identified? No. Did the changes affect the overall design, approach, or implementation? Not at all. That final question is most telling: for the time spent, nothing useful was truly achieved. It’s as if the reviewers were shaming me for not meeting their vision of perfect code, yet the comments and the code changes made were ultimately trivial and unnecessary. Don’t misinterpret my words: I believe code reviews are necessary to ensure some level of code quality and consistency. However, what are our goals, are those goals achievable, and how far do we need to take them? Every engineer’s work is shaped by what they view as important: remember, Hello World has been implemented in uncountable different ways, all correct and incorrect, depending on your personal standards. My conclusion: Perfect code is unattainable; understandable and maintainable code is much more useful to an organization. Code Reviews in the Dark Ages Writing and reviewing code was substantially different in the not-so-distant past: when engineers debated text editors (Emacs, thank you very much), when tools such as Crucible, Collaborator, or GitHub were a gleam in their creators’ eyes, when software development was not possible on laptops, and when your desktop was plugged into a UPS to prevent inadvertent losses—truly the dark ages. Back then, code reviews were IRL and analog: schedule a meeting, print out the code, and gather to discuss the code as a group. Most often, we started with higher-level design docs, architectural landmarks, and class models, then dove deeper into specific areas as overall understanding increased. Line-by-line analysis was not the intention, though critical or complicated areas might require detailed analysis. Engineers focused on different properties or areas of the code, thereby ensuring a diversity of opinions; for example, someone with specific domain knowledge made sure the business rules, as she understood them, were correctly implemented. The final outcome was a list of TODOs for the author to ponder and work on. Overall, it was a very effective process for both junior and senior engineers, providing a forum to share ideas, provide feedback, learn what others are doing, ensure adherence to standards, and improve overall code quality.
Managers also learn more about their team and team dynamics, such as who speaks up, who needs help to grow, who is technically not pulling their weight, etc. However, it’s time-consuming and expensive to do regularly and difficult to not take personally: it is your code, your baby, being discussed, and it can feel like a personal attack. I’ve had peers who refused to do reviews because they were afraid it would affect their year-end performance reviews. But there was no other choice: DevOps was decades off, test-driven development wasn’t a thing, and some engineers just couldn’t be trusted (which, unfortunately, remains true today). Types of Pull Requests Before digging into the possible reasons for tech debt, let’s identify what I see as the basic types of pull requests that engineers create: Bug Fixes The most prevalent type – because all code has bugs – is usually self-contained within a small number of files. More insidious bugs often require larger-scale changes and, in fact, may indicate more fundamental problems with the implementation that should be addressed. Mindless Refactors Large-scale changes to an existing code base, almost exclusively made by leveraging your IDE: name changes (namespace, class, property, method, enum values), structural changes (e.g., moving classes between namespaces), class/method extraction, global code reformatting, optimizing Java imports, or other changes that are difficult when attempted manually. Reviewers often see almost-identical changes across dozens – potentially hundreds – of files and must trust that the author did not sneak something else in, intentionally or not. Thoughtful Refactors The realization that the current implementation is already a problem or is soon to become one, and you’ll be dealing with the impact for some time to come. It may be as simple as centralizing some business logic that had been cut and pasted multiple times or as complicated as restructuring code to avoid endless conditional checks. In the end, you hope that everything functions as it originally did. Feature Enhancements Pull requests are created as the code base evolves and matures to support modified business requirements, growing usage, new deployment targets, or something else. The quantity of changes can vary widely based on the impact of the change, especially when tests are substantially affected. Managing the release of the enhancements with feature flags usually requires multiple rounds of pull requests, first to add the enhancements and then to remove the previously implemented and supporting feature flags. New Features New features for an existing application or system may require adding code to an existing code base (e.g., new classes, methods, properties, or configuration files) or an entirely new code base (e.g., a new microservice in a new source code repository). The number of pull requests required and their size varies widely based on the complexity of the feature and any impact on existing code. Greenfield Development An engineer’s dream: no existing code to support and maintain, no deprecation strategies required to retire libraries or API endpoints, no munged-up data to worry about. Very likely, the tools, tech stack, and deployment targets change. Maybe it’s the organization’s first jump into truly cloud-native software development. Engineers become the proverbial kids in a candy store, pushing the envelope to see what – if any – boundaries exist.
Greenfield development PRs are anything and everything: architectural, shared libraries, feature work, infrastructure-as-code, etc. The feature work is often temporary because supporting work still needs to be completed. Where’s the Context? The biggest disadvantage of pull requests is understanding the context of a change, whether technical or business: you see what has changed, but not necessarily why the change occurred. Almost universally, engineers review pull requests in the browser and do their best to understand what’s happening, relying on their understanding of the tech stack, architecture, business domains, etc. While some have the background necessary to mentally grasp the overall impact of the change, for others, it’s guesswork, assumptions, and leaps of faith, which only gets worse as the complexity and size of the pull request increase. [Recently, a friend said he reviewed all pull requests in his IDE, greatly surprising me: it was the first I’d heard of such diligence. While noble, that thoroughness becomes a substantial time commitment unless that’s your primary responsibility. Only when absolutely necessary do I do this. I’m not sure how he pulls it off!] Other than those good Samaritans, mostly what you’re doing is static analysis of the change in front of you: what has changed, and does it make sense? You can look for similar changes (missing or present), emerging patterns that might drive refactoring, best practices, or others doing similar work. The more you know about the domain, the more value you can add; however, in the end, it’s often difficult to understand the end-to-end impact. Process Improvement As I don’t envision a return of in-person code reviews, let’s discuss how the overall pull request process can be improved: Goals: Aside from working on functional code, what is the team’s goal for the pull request? Standards adherence? Consistency? Reusability? Resource optimization? Scalability? Be explicit about what is important and what is a trifle. Automation: Anything automated reduces reviewers’ overall responsibilities. Static code analysis (e.g., Sonar, PMD) and security checking (e.g., Snyk, Mend) are obvious, but automation may also include formatting code, applying organization conventions, or approving new dependencies. If possible, the automation is completed prior to engineers being asked for their review (a minimal example workflow appears at the end of this article). Documentation: Provide an explanation – any explanation – of what’s happening: at times, even the most obvious seems to need minor clarifications. Code or pull request comments are ideal as they’re easily found: don’t expect a future maintainer to dissect the JIRA description and reverse-engineer it (assuming it’s even still valid). List external dependencies and impacts. Unit and API tests also assist. Helpful clarifications, not extensive line-by-line explanations. Design Docs: The more fundamental or impactful the changes are, the more difficult – and necessary – it is to get a common understanding across engineers. I’m not implying full-bore UML modeling, but enough to convey meaning: state diagrams, basic data modeling, flow charts, tech stacks, etc. Scheduled: Context-switching between your work and pull requests kills productivity. An alternative is for you or the team to designate time specifically to review pull requests, with no review expectations at other times: you may review, but you are not obligated to. Other Pull Request Challenges Tightly Coupled: Also known as “the left hand doesn’t know what the right hand is doing.”
The work encompasses changes in different areas, such as the database team defining a new collection and another team creating the microservice using it. If the collection access changes and the database team is not informed, the indexes needed to efficiently identify the documents may not be created. All-encompassing: A single pull request contains code changes for different work streams, resulting in dozens or even hundreds of files needing review. Confused and overwhelmed, reviewers try but eventually throw up their hands in defeat in the face of overwhelming odds. Emergency: Whether actual or perceived, the author wants immediate, emergency approval to push the change through, leaving no time for opinions or for clarifying the problem and its solution (correct or otherwise). No questions are asked if leadership screams loudly enough, and the team is guaranteed to deal with the downstream fallout. Conclusions The reality is that many organizations have their software engineers geographically dispersed across different time zones, so it’s inevitable that code reviews and pull requests are asynchronous: it’s logistically impossible to get everyone together in the same (virtual) room at the same time. That said, the asynchronous nature of pull requests introduces different challenges that organizations struggle with, and the risk is that code reviews devolve into a checklist no-op that just happens because someone said so. Organizations should constantly be looking to improve the process, to make it a value-add that improves the overall quality of their product without becoming bureaucratic overhead that everyone complains about. However, my experiences have shown that pull requests can introduce quality problems and tech debt without anyone realizing it until it’s too late.
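To make the earlier automation suggestion concrete, here is a minimal, hypothetical GitHub Actions workflow that runs a project's existing checks on every pull request before human reviewers are asked to weigh in; the build command is a placeholder, and dedicated integrations from tools such as Sonar, PMD, Snyk, or Mend would replace or extend that step.

# .github/workflows/pr-checks.yml (hypothetical)
name: pr-checks
on:
  pull_request:
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run static analysis and tests
        run: ./gradlew check   # placeholder: substitute your project's lint/test entry point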
John Vester, Staff Engineer, Marqeta (@JohnJVester)
Seun Matt, Engineering Manager, Cellulant