The cultural movement that is DevOps — which, in short, encourages close collaboration among developers, IT operations, and system admins — also encompasses a set of tools, techniques, and practices. As part of DevOps, the CI/CD process incorporates automation into the SDLC, allowing teams to integrate and deliver incremental changes iteratively and at a quicker pace. Together, these human- and technology-oriented elements enable smooth, fast, and quality software releases. This Zone is your go-to source on all things DevOps and CI/CD (end to end!).
Software development is a complex and dynamic field requiring constant input, iteration, and collaboration. The need for reliable, timely, and high-quality solutions has never been greater in today's fiercely competitive marketplace. Enter DevOps, a revolutionary approach that serves as the foundation for addressing such challenges. DevOps is more than just a methodology; it combines practices that seamlessly integrate software development and IT operations to streamline workflows. With its emphasis on improving communication, promoting teamwork, and uniting software delivery teams, DevOps acts as a trigger for a more responsive and synchronized development process. DevOps is a crucial tool in modern software development services, helping businesses achieve better overall performance, increased customer satisfaction, faster time-to-market, and cost-effectiveness. It is a dynamic force that adapts easily to the changing demands of the industry and allows businesses to successfully and swiftly negotiate the challenges of software development.

7 Ways DevOps Enhances the Software Development Lifecycle

Here are the top 7 ways that DevOps can enhance the software development lifecycle:

1. Accelerated Development

DevOps encourages continuous integration and delivery (CI/CD), which enables developers to merge code changes more frequently and release software more quickly. This shortens the time it takes to release updates or new features to production and speeds up the development cycle. The emphasis on automation and collaboration within the CI/CD framework empowers teams to respond to market demands, innovate swiftly, and maintain a competitive edge in the dynamic software development landscape.

2. Automated Testing

DevOps automation encompasses the whole software delivery pipeline and does more than just help eliminate bugs. Automated processes for continuous integration, deployment, and testing enable the timely release of new features and upgrades. This speeds up the development process and frees teams from manual, repetitive tasks so they can concentrate on strategic and important work. Automation increases productivity and builds a solid, future-ready development ecosystem by ensuring reliable software delivery and laying the groundwork for scalability, adaptability, and continual improvement.

3. Enhanced Collaboration

DevOps creates a collaborative culture by eliminating silos between the development and operations teams. By promoting communication and shared responsibilities, it guarantees that all parties participating in the software development process are on the same page, resulting in smoother workflows and fewer bottlenecks. The collaborative environment fosters ongoing learning and development, where each team member's specialty complements the others', creating an inventive and adaptable culture.

4. Infrastructure as Code (IaC)

DevOps strongly emphasizes handling infrastructure like code, allowing teams to use code scripts for infrastructure management and provisioning. This approach makes resource allocation more efficient, guarantees consistency across environments, and simplifies scaling. Moreover, Infrastructure as Code (IaC) offers a template for managing infrastructure, facilitating team collaboration, and controlling versions. This guarantees the infrastructure's reproducibility in various settings and encourages transparency and change traceability.
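For instance, a minimal, purely illustrative Ansible-style play shows what "infrastructure like code" looks like in practice; the host group and package are hypothetical:

```yaml
# Illustrative IaC sketch: a versioned, repeatable description of server state.
# "webservers" and nginx are assumptions, not part of the original article.
- hosts: webservers
  become: true
  tasks:
    - name: Ensure the web server package is installed
      package:
        name: nginx
        state: present
    - name: Ensure the web server is running
      service:
        name: nginx
        state: started
```

Because the desired state lives in a file, it can be reviewed, versioned, and replayed identically across environments.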
IaC also automates infrastructure provisioning, minimizing human error and facilitating quick deployment of complete environments. Teams can easily adjust to evolving needs, promoting an agile and responsive development process.

5. Improved Feedback and Monitoring

DevOps incorporates proactive alerting techniques and real-time monitoring to inform teams of any issues before they affect users. This predictive strategy makes preemptive intervention possible, reducing downtime and guaranteeing a flawless user experience. The continuous feedback loop is an invaluable source of ongoing improvements and facilitates quick issue resolution. By utilizing monitoring and feedback insights, development teams can arrive at well-informed decisions, enhance performance, and align software features with changing user expectations. This approach ultimately ensures an effective and user-centric software ecosystem.

6. Enhanced Security

DevSecOps, or integrating security controls throughout the development process, is a component of DevOps methods. Automated security checks, early vulnerability resolution, and ongoing monitoring all assist in identifying and reducing potential security concerns. Moreover, in the DevSecOps paradigm, security is considered an essential component of the development lifecycle rather than a problem that arises after deployment. Building automated security checks and scans into the development workflow ensures easier and earlier vulnerability identification. Constant monitoring provides proactive identification and mitigation of potential security problems and protects against ever-evolving attacks.

7. Efficient Utilization of Resources

DevOps uses containerization and automation to promote resource efficiency. Tools such as Docker and Kubernetes make consistent deployment across many environments possible, which maximizes resource usage and minimizes incompatibilities. Teams can now assign resources dynamically based on the application's demands owing to automated resource provisioning and administration, which improves infrastructure utilization efficiency. This method optimizes resources and lowers operational overhead while promoting a resource-efficient environment that easily fits the needs of modern software development.

Final Thoughts

DevOps automation has numerous advantages that help advance and improve the software development process. When DevOps is properly integrated, it can cause a paradigm change that affects software functionality and completely changes an organization's operating model. This shift goes beyond enhanced software capabilities; it also promotes better communication, higher performance standards, the production of superior products, and increased productivity in general. DevOps offers a comprehensive approach leading to improved operational efficiency, quality digital products, and increased productivity; it does more than just improve software operations. Businesses benefit greatly from the revolutionary potential of DevOps, which guarantees that they are not only meeting market demands but also raising the bar for innovation and operational excellence.
Mid-to-large software development projects involve many people across multiple teams, resources, tools, and stages of development. They all need to be managed and streamlined so that the project delivers not only the desired product but one that is easy to manage and maintain under evolving circumstances. There are quite a number of project management models and techniques that organizations typically follow. DevOps is one of them: an agile approach to the software development process with the primary objective of continuous improvement. DevOps is easy when you know your organization, can adopt changes readily, and have the right attitude to make DevOps a reality in your organization. Learn more about the DevOps toolchain.

The 6 Cs are the DevOps best practices typically followed by an organization to develop faster and deliver more reliable updates to the customer.

1. Continuous Business Planning

Continuous business planning adds to the agility of the development process and allows teams to make smarter and quicker decisions. Problems and delays that occur, or may occur, can be quickly identified and planned for in order to adapt to changing circumstances. Customer requirements can be anticipated so that the team stays one step ahead of their needs. For example, the team may decide to drop a feature it had been implementing and reallocate the resources toward a different feature that market research suggests customers now prefer. Changes are quick with continuous planning. Continuous planning also helps in anticipating potential risks and dependencies, so teams can take proactive measures to handle whatever situation occurs.

2. Collaborative Development

DevOps completely eradicates the gap between development and operations. It helps establish close communication among all team members, who face success or failure together. Each member is part and parcel of all the intricacies of the development lifecycle, and any team can come forward to solve a problem that occurs. For example, suppose software deployed at a remote location reports a glitch that requires immediate attention. The team quickly takes up the issue live, with all members actively participating; they spend hours analyzing the issue and provide an immediate workaround to solve the problem quickly. This is what collaborative development does: it creates an environment of close communication among teams, which is crucial for any successful operation.

3. Continuous Testing

Testing is performed at regular intervals to reflect any changes made to the code. It is part of the software delivery pipeline, providing quick feedback on changes made in the code repository. The central idea behind continuous testing is to quickly identify a problem, inform the development team, and solve it as soon as possible. Continuous testing is not only vital to the delivery of a reliable product to the customer, but also adds to the pace of continuous improvement through efficient use of the feedback loop by the development team.

4. Continuous Release and Deployment

With the continuous release of new features, bug fixes and improvements can be delivered quickly and consistently. The primary focus of continuous release is to automate and streamline the process of delivering code changes to the production environment.
The build and testing processes are automated with continuous integration (CI) as part of the continuous release. Although the primary aim of both processes is to increase the speed, frequency, and reliability of software releases, there is a subtle difference between continuous release and continuous deployment: in continuous release, the decision to deploy is typically a manual step, while in continuous deployment it is completely automated, with deployment happening as soon as code changes pass testing. The choice between the two depends on the needs of the development team, the requirements of the software, and the risks involved. Learn more about process flow for DevOps deployments.

5. Continuous Monitoring

This is needed to monitor changes and address errors and mistakes as soon as they happen. It is an automated process for the early detection of compliance issues that may occur at any stage of the DevOps process. For example, for an application deployed in the cloud, the DevOps security team must be aware of, and continuously monitor for, any security vulnerabilities that are present or may arise, without compromising the privacy of the customer using it for their business. Beyond errors and security, continuous monitoring covers any area that requires attention and provides feedback for immediate rectification.

6. Customer Feedback and Optimization

This allows for an immediate response from your customers to your product and its features and helps you adjust accordingly. Feedback is very important for continuous improvement. The feedback loop works across all aspects of the delivery process, such as quality metrics, customer satisfaction, experience and sentiment, service level agreements, the data environment, etc. Optimization is vital to a reliable and efficient software product that adheres to the organization's quality standards; it targets the needs of the hour and keeps the product functioning exactly to the customer's requirements. The feedback loop and continuous monitoring provide valuable input for continuous optimization of the software.

Conclusion

DevOps originates from enterprise software management and agile software methodology, with the purpose of automating most, if not all, of the process from planning to deployment in the development lifecycle. A good DevOps organization takes care of these 6 Cs. Although this is not a must-have model, it is one of the more sophisticated ones. CD pipelines, CI/CD tools, and containers make things easy, and when you want to practice DevOps, having a microservices architecture makes more sense. The sketch below illustrates the release-vs-deployment distinction described under the fourth C.
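Here is a minimal, hypothetical pipeline sketch in Azure-Pipelines-style YAML (all stage, job, and script names are assumptions) showing that distinction. The deploy stage below runs automatically on a green build, which is continuous deployment; gating the production environment behind a manual approval turns the same pipeline into continuous release:

```yaml
# Hedged sketch only; "main", build.sh, run-tests.sh, and deploy.sh are hypothetical.
trigger:
- main

stages:
- stage: Build
  jobs:
  - job: BuildAndTest
    steps:
    - script: ./build.sh && ./run-tests.sh
      displayName: 'Build and run the automated test suite'

# Continuous deployment: this stage runs automatically once Build succeeds.
# For continuous release, configure a manual approval on the environment instead.
- stage: Deploy
  dependsOn: Build
  jobs:
  - deployment: DeployToProduction
    environment: production   # approvals configured here gate the stage
    strategy:
      runOnce:
        deploy:
          steps:
          - script: ./deploy.sh
            displayName: 'Deploy the tested build'
```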
In this article, you will learn how to run an Ansible Playbook from the Azure DevOps tool. By incorporating Ansible Playbooks into the Azure release pipeline, organizations can achieve streamlined and automated workflows, reducing manual intervention and minimizing the risk of errors. This enhances the efficiency of the release process, accelerates time-to-market, and ensures a standardized and reliable deployment of applications and infrastructure on the Azure platform.

What Is an Ansible Playbook?

An Ansible Playbook is a configuration management and automation artifact used to define and execute a series of tasks. It is particularly valuable in managing infrastructure as code, ensuring consistency and repeatability in the deployment and configuration of systems. In the context of Azure release pipelines, Ansible Playbooks play a crucial role in automating the deployment and configuration of resources within the Azure environment. They allow for the definition of tasks such as provisioning virtual machines, configuring networking, and installing software components.

This tutorial assumes that the Ansible utility is installed and enabled for your project in Azure DevOps. You can download and install the utility from this link, and ask your Azure DevOps administrator to enable it. Related: Learn how to schedule pipelines in Azure DevOps.

How to Run an Ansible Playbook From Azure DevOps

Step 1: Create a New Release Pipeline

Create a new release pipeline with an empty job.

Step 2: Add Artifacts to the Release Pipeline Job

Next, add Azure DevOps in artifacts, as I am using an Azure repository to store our playbook and inventory file. I have already pushed the inventory file and the tutorial.yml playbook to my Azure repo branch, ansible-tutorial. Select your project, repo, and branch to add artifacts to your release pipeline.

```yaml
# tutorial.yml
- hosts: "{{ host }}"
  tasks:
    - name: create a test file for ansible
      shell: touch /tmp/tutorial.yml
```

Step 3: Upload and Configure a Secure Key in Stage 1 for Ansible Playbook Authentication

I use an SSH key for authentication on the target machine. To pass the SSH key, I upload it using the Download Secure File utility. This utility is used for storing secure files in your release pipeline, such as SSH keys, SSL certs, and CA certs. During execution, the files are downloaded to a temp folder, their path can be accessed through a reference variable, and they are deleted when the release job completes. Enter a reference name, then access the file with the variable $(<reference name>.secureFilePath), e.g., $(pemKey.secureFilePath).

Step 4: Change the File Permission

We add a shell command-line utility to change the file permission to 400 before using the key in the playbook. I have used $(pemKey.secureFilePath) to access the SSH key.

Step 5: Add and Configure the Ansible Task

Add the Ansible task and enter the playbook path as shown below. For the inventory, select File as the location and provide the file path. Use additional parameters to pass variables and other command-line options to the playbook at run time. To pass the path of the SSH key, I have used ansible_ssh_private_key_file=$(pemKey.secureFilePath). You can also set ansible_ssh_common_args='-o StrictHostKeyChecking=no' to disable host key checking if the run fails with a host key verification error.
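The task itself is configured through the classic release editor, so the article shows it only as screenshots. As a rough sketch, a YAML-pipeline equivalent might look like the following; the input names follow the Ansible extension for Azure DevOps and may differ by version, and the paths and host group are assumptions:

```yaml
# Hedged sketch; repository paths and the "appservers" host group are hypothetical.
- task: Ansible@0
  displayName: 'Run tutorial.yml'
  inputs:
    ansibleInterface: agentMachine
    playbookPathOnAgentMachine: 'ansible-tutorial/tutorial.yml'   # path assumed
    inventoriesAgentMachine: file
    inventoryFileOnAgentMachine: 'ansible-tutorial/inventory'     # path assumed
    # add ansible_ssh_common_args='-o StrictHostKeyChecking=no' if host key checking fails
    args: -e "host=appservers ansible_ssh_private_key_file=$(pemKey.secureFilePath)"
```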
Step 6: Save the Release Pipeline and Create a Release To Run the Playbook

We can see that our release completed successfully.

Summary

The Ansible playbook ran successfully from Azure DevOps. If you want to use a username and password instead of an SSH key, you can pass the Linux credentials via additional parameters using secret variables so that they stay masked, or you can use a shell command-line utility to set the credentials in environment variables for Ansible to read, as sketched below.
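As a rough illustration of the first option, the additional parameters might look like this; linuxUser and linuxPassword are hypothetical secret variables defined on the pipeline:

```yaml
# Fragment of the Ansible task's inputs; linuxUser / linuxPassword are
# hypothetical secret pipeline variables, kept masked in the job logs.
args: -e "host=appservers ansible_user=$(linuxUser) ansible_password=$(linuxPassword)"
```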
Jenkins is an open-source, self-contained automation server that includes the features of continuous integration (CI) and continuous delivery and deployment (CD) pipelines. Continuous integration ensures that team members commit their work regularly so that builds can be conducted upon significant change. CI generates a continuous feedback loop for the software, and any defects or deficiencies identified are resolved early and easily. Continuous delivery (CD), on the other hand, automates the build, test, and deployment operations. In both cases, the Jenkins server ensures that best practices are followed and the desired state is achieved. Because the process is automated, Jenkins helps increase the pace of releases, removing the limitations of manual deployment and significantly reducing the stress on the development and operations teams.

Jenkins provides many ways to set up a CI/CD environment for almost any code language and source code repository. It has a suite of plugins for implementing CI/CD pipelines in .NET MVC application development and ensuring high-quality deliverables, with support for MSBuild files, Git version control, CVS, Subversion, etc. Once the MSBuild plugin is installed and configured, the Jenkins pipeline can be initiated. The automation server takes care of almost the entire development lifecycle, from integration and testing to deployment of the .NET MVC application. The DevOps team can focus on the product, updates, and new features while many intricate processes are handled behind the scenes by the Jenkins server. For more details about automation servers, review our coverage of Jenkins vs. Bamboo.

This article provides a basic step-by-step implementation of CI/CD for the .NET MVC framework using Jenkins.

Step 1: Installing Build Tools Through the Visual Studio Installer

Before we proceed with actual building and deployment, we need to make sure we have build tools installed on the machine. This can be done through Visual Studio Build Tools, available from the Visual Studio Installer. Then we need to install the build tools for an MVC application, by clicking the Modify button for Visual Studio Build Tools and selecting Web development build tools. If we didn't install these build tools, the msbuild command would only work in the Developer Command Prompt.

Step 2: Installing Plugins for Jenkins

We need to install plugins to use in Jenkins. Go to Manage Plugins, then find and install the GitHub plugin and the MSBuild plugin.

I have implemented CI/CD using Jenkins. First of all, you need to download Jenkins. There are a number of ways to run it; I am using it as a Windows service on the machine. After starting the Jenkins service, you need to add your application and its configuration. First, click on "New Item." Then, give the application an item name, select the "Freestyle Project" option, and click "OK."

Step 3: The Configuration

Now for the configuration. In the General section, you can enter the description of the item and choose to discard old builds automatically. In the Source Code Management section, I used a Git repository; here we have to provide the URL of the repository and the branch from which you want to deploy the code. In the build environment, we want to make sure we choose to clean the Jenkins workspace before the build starts.
Step 4: Restoring NuGet Packages

When we commit code to the repository, we don't commit the packages, so first we have to restore the NuGet packages (the code is committed without its NuGet dependencies). Once the nuget restore command has run, you can see the packages folder created with all NuGet dependencies.

Step 5: Using MSBuild

Now we can actually execute the MSBuild command. This requires that the MSBuild plugin be installed in Jenkins and needs the path to the .sln file; optionally, we can also provide the PackageFileName attribute on the command line with the path and package name.

Command: /t:clean;build;package /p:PackageFileName="C:\Program Files (x86)\Jenkins\workspace\HelpDesk_CI\HelpdeskMVC.zip"

Step 6: Using the MSDeploy Command

The MSDeploy command can be used to deploy the zip created in the previous step to IIS.

Command: C:\"Program Files (x86)"\IIS\"Microsoft Web Deploy V3"\msdeploy.exe -verb:sync -source:package="HelpdeskMVC.zip" -dest:auto -setParam:name="IIS Web Application Name",value="Default Web Site/Helpdesk"

Step 7: Install the Web Deployment Tool

In order to deploy the zip, you need to install "Web Deployment Tool 2.1". You can install it by right-clicking on "Default Web Site", going to "Install Application from Gallery", and searching for "Web Deployment Tool 2.1" in the search box. In my case, it is already installed.

Step 8: Post-Build/Deploy Actions

You can add various post-build actions, like sending email notifications, archiving the deployable artifact, etc. You have now created a very basic CI/CD setup for your .NET MVC application. Jenkins gives us the flexibility to add more complex builds.
In the dynamic realm of Android app development, efficiency is key. Enter Azure DevOps, Microsoft's integrated solution that transforms the development lifecycle. This tutorial will show you how to leverage Azure DevOps for seamless Android app development.

What Is Azure DevOps?

Azure DevOps is not just a version control system; it's a comprehensive set of development and deployment tools that seamlessly integrate with popular platforms and technologies. From version control (Azure Repos) to continuous integration and delivery (Azure Pipelines), and even application monitoring (Azure Application Insights), Azure DevOps offers a unified environment to manage your entire development cycle. This unified approach significantly enhances collaboration, accelerates time-to-market, and ensures a more reliable and scalable deployment of your Android applications.

Azure DevOps is a game-changer in the development of feature-rich Android mobile applications, offering a unified platform for version control, continuous integration, and automated testing. With Azure Pipelines, you can seamlessly orchestrate the entire build and release process, ensuring that changes from each team member integrate smoothly. The integrated nature of Azure DevOps promotes collaboration, accelerates the development cycle, and provides robust tools for monitoring and troubleshooting. This unified approach not only helps meet tight deadlines but also ensures a reliable and scalable deployment of the Android application, enhancing the overall efficiency and success of the project.

We use the azure-pipelines.yml file at the root of the repository to build the Android application with a CI (continuous integration) build. Follow the instructions in the previously linked article, "Introduction to Azure DevOps," to create a build pipeline for an Android application. After creating a new build pipeline, you will be prompted to choose a repository; select the GitHub/Azure repository. You then need to authorize the Azure DevOps service to connect to the GitHub account. Click Authorize, and this will integrate with your build pipeline. After the connection to GitHub has been authorized, select the right repo, which is used to build the application.

How To Build an Android Application With Azure

Step 1: Get a Fresh Virtual Machine

Azure Pipelines has the option to build and deploy using a Microsoft-hosted agent. When running a build or release pipeline, you get a fresh virtual machine (VM). If Microsoft-hosted agents will not work for you, use a self-hosted agent, which acts as the build host.

```yaml
pool:
  name: Hosted VS2017
  demands: java
```

Step 2: Build the Mobile Application

Build the mobile application using the Gradle wrapper script. Check out the branch and repository containing the gradlew wrapper script; it is used for the build. If the agent is running on Windows, the build must use gradlew.bat; if the agent runs on Linux or macOS, it can use the gradlew shell script.

Step 3: Set Directories

Set the current working directory and the Gradle wrapper script directory.
```yaml
steps:
- task: Gradle@2
  displayName: 'gradlew assembleDebug'
  inputs:
    gradleWrapperFile: 'MobileApp/SourceCode -Android/gradlew'
    workingDirectory: 'MobileApp/SourceCode -Android'
    tasks: assembleDebug
    publishJUnitResults: false
    checkStyleRunAnalysis: true
    findBugsRunAnalysis: true
    pmdRunAnalysis: true
```

The next task detects all open-source components in your build: security vulnerabilities, scanned libraries, and outdated libraries (including dependencies from the source code). You can view the results at the build level, project level, and account level.

```yaml
- task: whitesource.ws-bolt.bolt.wss.WhiteSource Bolt@18
  displayName: 'WhiteSource Bolt'
  inputs:
    cwd: 'MobileApp/SourceCode -Android'
```

Step 4: Copy Files

Copy the .apk file from the source to the artifact directory.

```yaml
- task: CopyFiles@2
  displayName: 'Copy Files to: $(build.artifactStagingDirectory)'
  inputs:
    SourceFolder: 'MobileApp/SourceCode -Android'
    Contents: '**/*.apk'
    TargetFolder: '$(build.artifactStagingDirectory)'
```

Use the next task in the build pipeline to publish the build artifacts to Azure Pipelines and file shares; they will be stored on the Azure DevOps server.

```yaml
- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact: drop'
```

The new pipeline wizard should recognize that you already have an azure-pipelines.yml in the root of the repository. The azure-pipelines.yml file contains all the settings that the build service should use to build and test the application, as well as generate the output artifacts that will be used to deploy the app in a later release pipeline (CD).

Step 5: Save and Queue the Build

After everything is in place, save and queue the build so you can see the corresponding logs for each job's tasks.

Step 6: Extract the Artifact Zip Folder

After everything is done, extract the artifact zip folder, copy the .apk file onto a mobile device, and install it.

Conclusion

Azure DevOps is a game-changer for Android app development, streamlining processes and boosting collaboration. Encompassing version control, continuous integration, and automated testing, this unified solution accelerates development cycles and ensures the reliability and scalability of Android applications. This tutorial has guided you through the process of building and deploying an Android mobile application using Azure DevOps. By following these steps, you've gained the skills to efficiently deploy Android applications, meet tight deadlines, and ensure reliability. Whether you're optimizing your workflow or entering Android development, integrating Azure DevOps will significantly enhance your efficiency and project success. For convenience, the snippets above are assembled into a single sketch below.
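Putting the tutorial's pieces together, a minimal azure-pipelines.yml might look like the following. This is a sketch, not the author's exact file: the WhiteSource Bolt task and the static-analysis inputs are omitted for brevity, and the paths are taken verbatim from the article.

```yaml
# Minimal assembled sketch of the tutorial's pipeline (hedged; see note above).
pool:
  name: Hosted VS2017
  demands: java

steps:
# Build the debug APK with the repository's Gradle wrapper
- task: Gradle@2
  displayName: 'gradlew assembleDebug'
  inputs:
    gradleWrapperFile: 'MobileApp/SourceCode -Android/gradlew'
    workingDirectory: 'MobileApp/SourceCode -Android'
    tasks: assembleDebug
    publishJUnitResults: false

# Stage the produced .apk for publishing
- task: CopyFiles@2
  displayName: 'Copy Files to: $(build.artifactStagingDirectory)'
  inputs:
    SourceFolder: 'MobileApp/SourceCode -Android'
    Contents: '**/*.apk'
    TargetFolder: '$(build.artifactStagingDirectory)'

# Publish the staged files as the "drop" artifact
- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact: drop'
```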
Continuous Integration/Continuous Delivery (CI/CD) is a critical aspect of modern software development that brings efficiency, reliability, and speed to the development lifecycle. CI/CD enables developers to automate the building, testing, and deployment of software, ensuring that changes are integrated smoothly and consistently. In the context of Spring Boot-based Java applications, CI/CD becomes even more crucial. Spring Boot promotes a convention-over-configuration paradigm, making it easy to create standalone, production-grade Spring-based applications. By incorporating CI/CD practices, developers working with Spring Boot can streamline the deployment pipeline, catch bugs early in the development process, and deliver high-quality, reliable software with faster release cycles. This tutorial will guide the reader through the implementation of CI/CD in the context of Spring Boot, empowering them to optimize their development workflows and deliver robust Java applications with greater efficiency.

I am very excited to share my experiences building Continuous Integration/Continuous Delivery (CI/CD) into Spring Boot-based Java applications. First, let's establish everything we will learn in this tutorial:

1. Create a Spring Boot Java app using Spring Initializr.
2. Create a GitHub repository.
3. Use Travis CI and Docker to implement CI/CD.
4. Add Codecov to provide code coverage.
5. Use SonarCloud to write stellar code.
6. Build a project site using the GitHub site-maven-plugin.
7. Deploy the app on Heroku using heroku-maven-plugin.
8. Manage topics.

Gradually, we'll add badges to the README.md file so that we are notified in real time of the state of Travis CI, Docker, Codecov, and SonarCloud; we'll also add the license badge. Are you ready? If not, take time to prepare yourself and come back to this later. The code is available here, so just fork it; it's all yours!

How To Build CI/CD Into Spring Boot-Based Java Applications

Step 1: Create a Spring Boot Java App Using Spring Initializr

In this project, I used the Spring Tool Suite 4 (STS 4) IDE; you are free to use whatever tool you find suitable. STS 4 has Spring Initializr built in, which is why I chose it for this project. Click on File -> New -> Spring Starter Project and fill out the form as follows:

Name: cicd-applied-to-spring-boot-java-app
Group: com.cicd
Artifact: cicd-applied-to-spring-boot-java-app
Description: Implementing CI/CD on Spring Boot Java App
Package: com.cicd.cicd-applied-to-spring-boot-java-app

By default:

Type: Maven
Packaging: jar
Java Version: 8
Language: Java

Then, click Next, select Spring Web, and click Finish. The new project will appear. Next, open the CicdAppliedToSpringBootJavaAppApplication.java file and add a basic endpoint. To build, right-click -> Run As -> Maven build. To run the app, add the goal spring-boot:run and click Run. The result can be seen at http://localhost:8080/. Now, on to the next step!

Step 2: Create a GitHub Repository

First, you need to sign in or sign up. I'm already a GitHub user, so I just signed in and was directed to the homepage. To create a new repository, click on the green "New" button or click here.
You will then be directed to the new-repository form. Fill it out as follows:

Repository name: cicd-applied-to-spring-boot-java-app (I chose the same name as the artifact field from step one)
Description: Implementing Continuous Integration/Continuous Delivery on Spring Boot Java App

Click on Public, click on Initialize this repository with a README, and select the MIT license. Why? It's very simple: the following links are helpful to better understand why you need an MIT license, namely how to choose an open-source license, and how open-source licenses work and how to add them to your projects. Later, we'll add the .gitignore file. Then, click on Create repository.

In the new repository, I suggest you add a file named RESEARCHES.md. Why? While working on a project, you may face difficulties and need to ask for help; the goal is to save time when solving problems or fixing bugs. To create it, click on Create new file, fill the name field with RESEARCHES.md, and edit the file so that each research topic (CI/CD, for example) is followed by links representing the results ("##" makes a Markdown heading). Then click on the green "Commit new file" button at the bottom of the page.

Now, install Git (the Git installation can be found here) and GitHub Desktop (the GitHub Desktop installation can be found here). After installing these two tools, it's time to clone the project we created. Open GitHub Desktop and click on File -> Clone repository... In the pop-up, fill the search bar with "cicd" and you will find the repository "cicd-applied-to-spring-boot-java-app" among the results. Select it and click on Clone.

Once the repository is cloned, open the repository folder; mine contains three files: LICENSE, README.md, and RESEARCHES.md. Next, open the folder where the code from step one is saved, copy its content, and paste it into the repository folder.

It's important to ignore files and folders we will not directly modify while working on the project. To do that, we'll make some changes to the .gitignore file in the repository folder (I used Sublime Text to edit it); first, add .gitignore itself. Back in GitHub Desktop, fill the summary field with "First Upload" and click "Commit to master". So what's next? Click on Push origin. The repository is now up to date on GitHub.

Step 3: Use Travis CI and Docker To Implement CI/CD

Note: If you're not familiar with either of these tools, check out this Travis CI tutorial and Docker's Getting Started tutorial. Sign up or sign in with GitHub and make sure Travis CI has access to your repository. Then, create a file named .travis.yml, which contains the instructions that Travis CI will follow; once it's pushed, the repository appears on Travis CI.
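The original article shows the author's .travis.yml only as a screenshot. As a hedged starting point, a minimal file for a Maven project might look like this; the JDK version is an assumption:

```yaml
# Minimal sketch of a .travis.yml for a Maven build; openjdk8 is assumed.
language: java
jdk: openjdk8
script:
  - mvn clean install
```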
Now, we'll add a Travis CI badge so that we are notified about build results. To edit the README.md file, click on the pencil icon, add the badge snippet (replacing "FanJups" with your Travis CI username), then add the commit description "Adding Travis CI badge" and click the Commit changes button. It's important to know that for every change you make, Travis CI will trigger a build and send an email; it's a continuous process.

We successfully added Travis CI and its badge. Next, we'll focus on Docker. First, sign in or sign up on Docker Hub and click on the Create Repository button. Fill out the form as follows:

Name: cicd-applied-to-spring-boot-java-app (the GitHub repository name)
Description: Implementing Continuous Integration/Continuous Delivery on Spring Boot Java App (the GitHub repository description)
Visibility: choose Public
Build Settings: select GitHub

After clicking on the Create button, it's time to link our Docker repository to our GitHub repository. Click on Builds, then click on Link to GitHub and select your GitHub repository. Now that the GitHub repository is selected, we need to make some changes:

Autotest: select Internal and External Pull Requests
Repository links: select Enable for Base Image

Click on Save. We have succeeded in linking our GitHub repository to the Docker repository. If you need help with Docker builds, this link is helpful.

What's next? First, we'll install Docker; then we'll make some changes to the code and to Travis CI. To install Docker, go to Docker's Get Started page, select Docker for Developers, and click on Download Desktop and Take a Tutorial. To make sure you've installed Docker and verify it's running properly, open your command line, enter "docker", and validate.

Now, go back to your IDE or text editor; we'll make some changes to the code. Create a file named "Dockerfile". A Dockerfile is used when creating Docker images; to better understand its purpose, the Dockerfile reference will help you. To keep things simple, I use this Callicoder Dockerfile example with small changes. To create the file in STS 4, select the project, click on New -> File, fill the file name field with "Dockerfile", click the Finish button, and paste in the Dockerfile content.

Before making changes to the pom.xml, let's look at the current content. We add Spotify's dockerfile-maven-plugin to push the project to Docker Hub. Furthermore, we add the maven-dependency-plugin, as explained in a previous article, "Getting Started With Spring Boot and Docker," which states: "... to ensure the jar is unpacked before the Docker image is created, we add some configuration for the dependency plugin."

To continue, we will link Travis CI to Docker from our GitHub repository. Do you remember your Docker username and password? You will need them in order to proceed. We will create two environment variables in Travis CI. To get there, copy and paste this (https://travis-ci.com/GITHUBUSERNAME/cicd-applied-to-spring-boot-java-app) into your browser, replacing GITHUBUSERNAME with your username, or click on the Travis CI badge in your README.md. Click on More options -> Settings and fill in the form as follows:

Name: DOCKER_PASSWORD
Value: yourdockerpassword
Click the Add button

Name: DOCKER_USERNAME
Value: yourdockerusername
Click the Add button

To deploy on Docker, we'll use "mvn deploy" as explained by Spotify.
The Apache Maven Project describes the Apache Maven Deploy Plugin as the plugin used to "add artifacts to a remote repository." DZone has previously covered how to publish Maven artifacts using pipelines or Maven jobs. But we don't want to add artifacts to a remote repository: we just want to deploy to Docker. When we call the deploy phase, Maven normally requires a valid <distributionManagement/> section in the POM, which isn't the purpose here. Thus, we'll add this property in pom.xml:

<maven.deploy.skip>true</maven.deploy.skip>

If we don't add this property, this error will occur:

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-deploy-plugin:2.8.2:deploy (default-deploy) on project cicd-applied-to-spring-boot-java-app: Deployment failed: repository element was not specified in the POM inside distributionManagement element or in -DaltDeploymentRepository=id::layout::url parameter -> [Help 1]

At this stage, it's time to use those two Docker environment variables. Copy and paste the new .travis.yml (a hedged sketch of what it might look like appears at the end of this step) and push it to GitHub with the commit description "Linking Travis CI to Docker". We received a beautiful red cross on the Travis CI badge, meaning a beautiful error! But ignore that for now: we'll correct it later! Ladies and gentlemen, I'm happy to present to you: our beautiful error! Just go to the Travis CI repository and check out the build log: The command "mvn deploy" exited with 1.

We've already added the Travis CI badge; now it's time to do the same for Docker. Go to Shields.io, type "docker" in the search bar, and click on Docker Cloud Build Status among the results. What if I told you we'll get an error here also? Nevermind: just fill out the form and click on Copy Badge URL. Now, go back to the GitHub repository and edit README.md to add the Docker badge, with the commit description "Adding Docker badge".

We are all winners here, so let's get it right! Previously, we made changes to the pom.xml and created a Dockerfile. All of those errors occurred because Maven didn't know how to handle the deployment to Docker and the Dockerfile was absent, so it was impossible to push images. The time has come to push those changes (Dockerfile and pom.xml) to GitHub using GitHub Desktop. Now, we have two ugly green badges meaning success! Just kidding: that's beautiful. To be sure, check your emails; you should have received two, one from Travis CI and the other from Docker.

Before moving on to the next step, we'll run the app using Docker. Just remember to replace "fanjups" with your own Docker Hub username. I got the following error: "Invalid or corrupt jarfile /app.jar." It's all about encoding, so I'll add two encoding properties to the pom.xml. Now, it's time to commit to GitHub; if you're confused about writing useful commit messages, DZone has covered this topic in the past. Before running the app again, it's important to list all containers using docker ps, check the CONTAINER ID, stop the container (docker stop "CONTAINER ID"), and remove it (docker rm "CONTAINER ID") because it's persisted, as explained by this post on Spring Boot with Docker. Then, we run the app again to ensure that everything works well. I was so happy when I solved this problem!

The core steps are now over; we've successfully implemented the CI/CD. Now, let's add some useful tools!
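As promised, here is a hedged sketch of what the Docker-enabled .travis.yml might look like at this point. The author's exact file is shown only as an image in the original; the login step and JDK version are assumptions:

```yaml
# Sketch only: logs in to Docker Hub with the two Travis CI environment
# variables, then lets the Spotify dockerfile-maven-plugin build and push
# the image during the Maven deploy phase.
language: java
jdk: openjdk8
services:
  - docker
before_install:
  - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
script:
  - mvn deploy   # deploy phase triggers the dockerfile-maven-plugin push
```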
Step 4: Add Codecov for Code Coverage

First, make sure you've updated the project on your computer: click on Pull origin, then copy the modified files from the GitHub folder into your workspace (in this case, only the pom.xml). Don't forget to refresh the project in STS 4 so the changes are picked up.

To make better use of this tool, we add a unit test. First, create a new package, com.cicd.cicdappliedtospringbootjavaapp.controller. Second, create a new class, HelloController.java, and change CicdAppliedToSpringBootJavaAppApplication.java accordingly.

Before running the app on your computer, note that you can skip the dockerfile plugin entirely, because the deployment takes place from the GitHub repository managed by Travis CI. To do this, just add the option -Ddockerfile.skip, as explained in the Spotify dockerfile-maven-plugin's usage notes, to your Maven command. We get: mvn spring-boot:run -Ddockerfile.skip.

Now, log in or sign up to Codecov with GitHub. Click on Account -> Repositories -> Add new repository, then choose your GitHub repository or follow this link (https://codecov.io/gh/GITHUB_USERNAME/GITHUB_REPOSITORY), replacing GITHUB_REPOSITORY with cicd-applied-to-spring-boot-java-app and GITHUB_USERNAME with yours. Last time, we added two environment variables for Docker; now, we add the Codecov environment variable, CODECOV_TOKEN, as well: copy your token and add it to your Travis CI repository. We also make some changes to the pom.xml by adding the jacoco-maven-plugin, and we edit .travis.yml in the GitHub repository.

What Time Is It? Codecov Badge Time!

Go to your Codecov repository and click on Settings -> Badge -> Copy (from Markdown). Then, go to your GitHub repository, paste it into README.md, and push your changes from your computer to GitHub. Code coverage: 60%.

Perhaps you want to deactivate the coverage and activate it later. If so, go ahead and create a file named codecov.yml. Since it's useful to know coverage right now, I'll comment out each of its lines with "#". If you wish to learn more, click here to read the docs. Now, on to step 5!

Step 5: Use SonarCloud To Write Great Code

To start, log in or sign up with GitHub. Click on + (Analyze new project or create new organization) -> Analyze new project -> Import another organization -> Choose an organization on GitHub. Next, make sure SonarCloud has access to your GitHub repository. Back on SonarCloud, choose a key; I suggest using "cicd-applied-to-spring-boot-java-app". Then, click on Continue -> Choose Free plan -> Create Organization -> Analyze new project -> Select your GitHub repository -> Set Up -> With Travis CI -> Provide and encrypt your token -> Copy.

Go back to Travis CI and create a SonarCloud environment variable named SONAR_TOKEN; as its value, paste the token you've just copied. Now, back on SonarCloud, click on Continue -> Edit your .travis.yml file -> Choose Maven as build technology -> Configure your platform -> Configure the scanner -> Copy. I chose to put the SonarCloud script under after_success instead of script because I focus on deployment here; you are free to place it where you want. Also, create a file named sonar-project.properties and edit it as follows:

sonar.projectKey=GITHUBUSERNAME_cicd-applied-to-spring-boot-java-app

Go back to SonarCloud and click on Finish.
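The exact Travis CI additions for these two tools are shown as screenshots in the original. A hedged sketch of the after_success section, assuming the standard Codecov bash uploader and the SONAR_TOKEN variable created above, might look like this:

```yaml
# Sketch only; the author's real file may differ.
after_success:
  - bash <(curl -s https://codecov.io/bash)        # upload the JaCoCo report to Codecov
  - mvn sonar:sonar -Dsonar.login=$SONAR_TOKEN     # run the SonarCloud analysis
```

To end, we add a SonarCloud badge into README.md.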
To get the badge for another project, use groupId:artifactId. Here's the SonarCloud badge already added.

Step 6: Build a Project Site Using the GitHub site-maven-plugin

To get started, open pom.xml on your computer. We add:

- The OAuth token and GitHub servers as properties
- org.apache.maven.plugins:maven-site-plugin
- com.github.github:site-maven-plugin
- org.apache.maven.plugins:maven-project-info-reports-plugin
- A developers section
- An organization section
- An issueManagement section
- A software configuration management (SCM) section

"The important configuration is to allow the OAuth token to be read from an environment variable (excerpt from pom.xml)," as explained by Michael Lanyon's blog. "To create the token, follow these instructions." Copy the token, then create a new environment variable named GITHUB_OAUTH_TOKEN. Push pom.xml to GitHub and edit .travis.yml by adding "- mvn site" under after_success. After pushing all the changes, the gh-pages branch and project site are created; each time you push, the site will be updated if necessary. To see the site, click on environment -> View deployment (under Deployed to github-pages). Here's a link to my GitHub repo.

Step 7: Deploy the App on Heroku Using heroku-maven-plugin

Here we go! Log in or sign up for Heroku. Click on New -> Create new app. Enter an app name (cicd-spring-boot-java-app; cicd-applied-to-spring-boot-java-app is too long as an app name), choose a region, and click Create app. Next, click Connect to GitHub, search for the GitHub repository, and once you find it, click Connect. Check Wait for CI to pass before deploying and click Enable Automatic Deploys.

Go to Account settings, copy your API key, and create a new Travis CI environment variable named HEROKU_API_KEY. This is the last environment variable linked to this project. It's time to edit pom.xml and push it to GitHub. We add:

- full-artifact-name as a property
- com.heroku.sdk:heroku-maven-plugin

Now, we focus on .travis.yml. To deploy on Docker Hub, we used mvn deploy; to deploy on Heroku, we'll use mvn heroku:deploy. Deploying to both Docker and Heroku would repeat the deploy phase twice and risk exceeding the build timeout, so to avoid that, we'll only use mvn heroku:deploy. We succeeded in deploying on Heroku! Hooray! Now, go to https://cicd-spring-boot-java-app.herokuapp.com/. It's time for the final step.

Step 8: Manage Topics

What does it mean to be at the last stage!? Topics are helpful for getting a quick overview of the project. Go back to the GitHub repository, click on Manage topics, and add whatever you want. By the way, we also added an MIT license badge to the README.md and a license section to the pom.xml!

Conclusion

Congratulations! You're all done. To sum things up, you learned how to implement CI/CD on a Spring Boot Java app using Maven, GitHub, Travis CI, Docker, Codecov, SonarCloud, and Heroku. This is a template you are free to use; if you're confused, please ask in the comments, and I also suggest reading the docs as many times as necessary. The code is available here. So just fork; it's all yours!

Further Reading

Setting Up a CI/CD Pipeline With Spring MVC, Jenkins, and Kubernetes on AWS
DevOps Tutorial: Docker, Kubernetes, and Azure DevOps
Azure DevOps is a Microsoft-provided "one-stop shop" for software development. It allows developers, functional users, and any other stakeholders to collaborate efficiently and effectively so that any and all requirements for a software project can be satisfied. One of Azure DevOps' most popular components is its version control functionality, which allows developers to work on large-scale software projects without conflicting with one another. This functionality also works seamlessly with the Git version control solution. Azure DevOps, in spite of being part of the Microsoft ecosystem, supports any language or coding environment.

Another feature that Azure DevOps provides is Pipelines. Azure Pipelines has proven to be an invaluable software testing tool, and while manually running an individual pipeline after a pull request or a commit helps move testing along, you might find it more useful to run a pipeline automatically, based on the events you choose. Azure DevOps allows you to create and deploy pipeline triggers to achieve this end. A pipeline trigger, as the name suggests, is a mechanism provided by Azure DevOps that allows an Azure Pipeline to run when certain events happen. Pipeline triggers function similarly to database triggers, where the database takes a specific pre-defined action when data is inserted or modified. They are the building blocks of continuous integration (CI), which is "the process of automatically building and testing code every time a team member commits code changes to version control." This article demonstrates how to trigger a build pipeline for scheduled CI and pull requests using the Azure DevOps build pipeline trigger.

Enable CI in Your Azure DevOps Project

Step 1

Click on "Pipelines" in the left-hand menu.

Step 2

On the "Pipelines" screen, click on the "More Options" button to the right of your specific pipeline, then click "Edit".

Step 3

On the next screen, the one which shows your YAML code, click on the "More Actions" button near the upper right-hand corner of the screen, then click "Triggers".

Step 4

The build pipeline Triggers tab specifies the events that trigger builds, and the same build pipeline can be used for both CI and scheduled builds. By default, builds are configured with a CI trigger on all branches. You can control which branches get triggered with the syntax shown in the following steps.

Step 5

Include the branches you want to trigger, and then exclude the branches you don't:

```yaml
trigger:
  branches:
    include:
    - master
    exclude:
    - develop/*
```

Step 6

In addition to the branch lists, you can configure triggers based on tags:

```yaml
trigger:
  branches:
    include:
    - refs/tags/{test}
```

Step 7

Disable CI builds entirely by specifying trigger: none. Alternatively, by using a scheduled trigger, the pipeline can be triggered every day, or only on the days you choose. Further reading: Implement CI/CD For Multibranch Pipelines.

Build Completion Triggers

Large products often have components that depend on one another, and these components are often built independently. When an upstream component changes (e.g., its packages), the downstream component has to be rebuilt and revalidated. Usually, people manage these dependencies manually. With Azure DevOps, a CI build can be triggered upon the successful completion of another build. Artifacts built by the upstream pipeline can be downloaded and used in the later build, and the build will generate variables such as Build.TriggeredBy.BuildId and Build.TriggeredBy.DefinitionId.
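In YAML, such a build completion trigger can be declared as a pipeline resource. A minimal sketch, where the upstream pipeline name is hypothetical, looks like this:

```yaml
# Downstream pipeline: runs whenever the named upstream pipeline completes successfully.
resources:
  pipelines:
  - pipeline: upstream            # alias used to reference the resource in this pipeline
    source: UpstreamBuildName     # hypothetical name of the triggering pipeline
    trigger: true                 # enable the build completion trigger
```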
Creating a Pull Request

Pull requests are used to review and merge code changes in a Git project. A pull request is how teams review code and give feedback on changes before merging it into the master branch. Reviewers can go through the proposed code changes and comments and approve or reject the code.

Step 1

Enable pull request validation under Triggers, and specify the branch into which the code will merge.

Step 2

Commit a file at the root of the repository on a topic branch (develop).

Step 3

After committing code to the topic branch, create a pull request.

Step 4

When creating the pull request, choose the "master" branch as the base and the topic branch as the compare branch.

Step 5

Enter the title and comments for the pull request, then create it. Once the pull request is created, the CI build will automatically start.

Step 6

Go through the details; they will navigate you to the Azure DevOps portal. The build starts and runs automatically, and you can view it by its build ID, shown as a pull request build.

Step 7

Squash merging keeps the default branch history clean. When a pull request is completed, merge the topic branch into the default branch (usually master). This merge adds the commits of the topic branch to the main branch and creates a merge commit to reconcile any conflicts between the default and topic branches.

Step 8

When squash merging is done, it is better practice to delete the source branch. The build is then triggered through continuous integration (CI). DZone has previously covered release pipelines using Azure DevOps.

The azure-pipelines.yaml trigger variations are shown below:

```yaml
trigger:
  branches:
    include:
    - master
    exclude:
    - develop/*

# tag-based triggers
trigger:
  branches:
    include:
    - refs/tags/{test}
    exclude:
    - refs/tags/{testapp}

# if you don't specify any triggers in the build, the default is as below
trigger:
  branches:
    include:
    - '*' # must quote since "*" is a YAML reserved character; we want a string

# batch building: a specific branch build with batching
trigger:
  batch: true
  branches:
    include:
    - master

# specific path build
trigger:
  branches:
    include:
    - master
    - releases/*
  paths:
    include:
    - docs/*
    exclude:
    - docs/README.md

# a pipeline with no CI trigger
trigger: none
```
This article will demonstrate how to build a complete CI/CD pipeline in Visual Studio and deploy it to Azure using the new Continuous Delivery Extension for Visual Studio. Using CI allows you to merge code changes in order to ensure that they work with the existing code base and to perform testing. Using CD, on the other hand, you repeatedly push code through a deployment pipeline where it is built, tested, and deployed. This CI/CD team practice automates the build, testing, and deployment of your application and allows complete traceability, letting you see code changes, reviews, and test results.

What Is Visual Studio?

Visual Studio is a powerful Integrated Development Environment (IDE). This feature-rich IDE has a robust environment for coding, debugging, and building applications. Azure DevOps (previously VS Team Services) has a comprehensive collection of collaboration tools and extensions that closely integrate the CI/CD pipeline with the Visual Studio environment. CI (continuous integration) merges any code changes into the existing code base, while CD (continuous deployment) pushes them through the deployment pipeline to be built, tested, and deployed. Visual Studio with the CI/CD extension thus automates the build, deployment, and testing process of software development, and it also allows complete traceability of code changes, reviews, and test results.

The quality of software is largely dependent on the process applied to develop it. The automation at the heart of CI/CD practices is focused on this goal through continuous delivery and deployment. Consequently, this not only ensures software quality but also enhances the security and profitability of production. It also shortens the time needed to ship new features, creating happy customers with less stress on development.

In order to create a CI build, a release pipeline, and release management that deploys the code into Azure, all you need is an existing web-based application and an extension from the marketplace. DZone has previously covered how to build a CI/CD pipeline from scratch.

How To Build a CI/CD Pipeline With Visual Studio

Step 1: Enable the Continuous Delivery Extension for Visual Studio

In order to use the Continuous Delivery Tools for Visual Studio extension, you just need to enable it. The extension makes it simple to automate and stay up to date on your DevOps pipeline for projects targeting Azure, and the tools also help you improve your code quality and security. Go to Tools and choose Extensions and Updates. In the prompted window, select Continuous Delivery Tools for Visual Studio and click Enable. (If you don't have Continuous Delivery Tools installed, go to the online Visual Studio Marketplace, search for "Continuous", and download it.)

Step 2: Create a Project in Team Services

In this step, you are going to create a project in Team Services and put your project code there without leaving your IDE. Team Services is a tool that allows you to build with Continuous Integration and Continuous Delivery. Go into the Solution Explorer and right-click on your web-based project. Click on the new context menu item, Configure Continuous Delivery. A new window, Configure Continuous Delivery, is displayed. Click on the Add this project to source control plus button, then click on the Publish Git Repo button located in the Publish to Visual Studio Team Services section of Team Explorer.
Your Microsoft account is automatically fetched from your IDE, and the Team Services domain that will be used and your repository name are also displayed. Click the Publish Repository button to create a project in Team Services. After the synchronization finishes, you will see that your project has been created in Team Explorer. Your project now exists in your Team Services account: the source code is uploaded, there is a Git repository, and a continuous delivery pipeline is being generated automatically. In the Output window, you can see that CI/CD is set up for your project. After a while, you will get three different links:

A link to the build
A link to the release
A link to the assets created in Azure, which will be the target for your deployment (application service)

Step 3: Open the Project in Team Services

A build definition is the entity through which you define your automated build process. In the build definition, you compose a set of tasks, each of which performs a step in your build. Copy the build definition link provided in the Output window and paste it into a browser to open the project containing your application in Team Services. The summary for the build definition is displayed, and you can see that the build is already running. Click the build link; the output of the build server running your build automatically is shown. Click Edit build definition to add an additional task or customize the tasks that are already there.

Step 4: Test Assemblies Task

Each task has a Version selector that lets you specify the major version of the task used in your build or deployment. When a new minor version is released (for example, 1.2 to 1.3), your build or release automatically uses the new version. However, if a new major version is released (for example, 2.0), your build or release continues to use the major version you specified until you edit the definition and manually change to the new major version. Click Test Assemblies. A little flag icon indicates that a new preview version of this task is available. Click the flag icon and choose version 2.* to preview it. Several new options appear for Test Assemblies. One of them is Run only impacted tests: tooling analyzes which lines of code were changed against the tests that were run in the past, so it knows which tests execute which lines of code. You no longer have to run all of your tests; you can run only the tests impacted by the changes. Run tests in parallel on multi-core machines makes your tests run in a way that uses all the available cores, effectively increasing the number of tests running at the same time and reducing the total test time. (Both options appear in the YAML sketch after this walkthrough.)

Step 5: Add an Additional Task

A task is the building block for defining automation in a build definition, or in an environment of a release definition. A task is simply a packaged script or procedure that has been abstracted with a set of inputs, and there are built-in tasks that enable fundamental build and deployment scenarios. Click the Add Task plus button to create a new additional task. An enormous list of out-of-the-box tasks is displayed, letting you target nearly any language or platform (Chef support, CocoaPods, Docker, Node.js, Java). If you want to install another feature or extension that is not listed, simply click the Check out our Marketplace link displayed above the list of tasks.
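Step 5 describes a task as a packaged script or procedure abstracted with a set of inputs. The same shape is easy to see in YAML pipelines; a minimal sketch for illustration, using the built-in Node.js tool installer task plus a plain script step (versions and commands here are assumptions, not part of the walkthrough):

steps:
# A packaged procedure: the task name selects the procedure...
- task: NodeTool@0
  inputs:
    versionSpec: '18.x'   # ...and the inputs parameterize it
# A packaged script: an inline command run on the agent
- script: npm install && npm test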
Step 6: Setting Encrypted and Non-Encrypted Variables

Variables are a great way to store and share key bits of data in your build definition, and some build templates define certain variables for you automatically. Click the second tab, named Variables (next to the Tasks tab). Click the padlock located next to a variable value to encrypt it. After encrypting, the value is displayed as asterisks, and no one can see it except the person who encrypted it.

Step 7: Turn on the Continuous Integration (CI) Trigger

On the Triggers tab, you specify the events that will trigger the build; you can use the same build definition for both CI and scheduled builds. Click the third tab, named Triggers, where you can set up continuous integration. Enable the continuous integration trigger so that the build runs automatically whenever someone checks in code; in other words, it runs whenever a new version of the source artifacts is available.

Step 8: Build Definition Options

If the build process fails, you can automatically create a work item to track getting the problem fixed. You can specify the work item type and select whether to assign the work item to the requestor. For example, if this is a CI build and a team member checks in code that breaks the build, the work item is assigned to that person. Click the fourth tab, named Options, and enable the Create Work Item on Failure box. CI builds are supposed to build at every check-in, and if some of them fail because a developer made an error, a work item is created automatically to track getting the problem fixed. The Default agent queue option is displayed in the second half of the Options tab. The drop-down list shows all available pools:

Default (if your team uses private agents you set up yourself)
Hosted (a Windows-based machine, if your team uses VS2017 or VS2015)
Hosted Linux Preview (if your team uses development tools on Ubuntu)
Hosted VS2017 (if your team uses Visual Studio 2017)

Step 9: Build Summary

The build summary shows everything that happened during the build, including:

Code coverage
All work items and tasks
Deployments

Step 10: Release Definition

A release definition is one of the fundamental concepts in Release Management for VSTS and TFS. It defines the end-to-end release process for an application to be deployed across various environments. Remember that you, as a developer, never have to leave VS to deploy the application from VS into Azure. A release definition that deployed the code into Azure is displayed. Click the three dots next to the release definition and select Edit from the context menu. A release definition consists of:

A series of environments
The tasks you want to perform in each environment

Step 11: Check if the Application Is Really Deployed From Visual Studio Into Azure

Microsoft Azure is a cloud computing service for building, testing, deploying, and managing applications and services through a global network of Microsoft-managed data centers. In this step, you verify that your web application is deployed in Azure: go to your Azure portal, click Resource Groups, and search for "demo." In the search results, click your web project "e2edemo," then open the web application link.
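This walkthrough uses the classic web-based editor, but the same building blocks (CI triggers, agent pools, variables, and the version 2 Test Assemblies task) can also be expressed in an azure-pipelines.yaml file. A minimal sketch under stated assumptions: a .NET test project, a hosted Windows pool, and a hypothetical buildConfiguration variable; secret variables are still created with the padlock in the web editor rather than in YAML.

trigger:
  branches:
    include:
    - master                      # CI build on every check-in (Step 7)

pool:
  vmImage: 'windows-latest'       # a hosted Windows agent (Step 8)

variables:
  buildConfiguration: 'Release'   # a non-encrypted variable (Step 6); secrets
                                  # are set in the web UI and referenced the
                                  # same way, e.g. $(mySecret)

steps:
- task: VSTest@2                  # the version 2 Test Assemblies task (Step 4)
  inputs:
    testSelector: 'testAssemblies'
    testAssemblyVer2: |
      **\*Tests*.dll
      !**\obj\**
    runInParallel: true           # use all available cores
    runOnlyImpactedTests: true    # run only tests impacted by the changes
    configuration: '$(buildConfiguration)'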
Further Reading: Release pipeline using Azure DevOps.

Conclusion

Continuous Integration is a software development practice in which you build and test software every time a developer pushes code to the repository. Continuous Delivery is a software engineering approach in which Continuous Integration, automated testing, and automated deployment capabilities allow software to be developed and deployed rapidly, reliably, and repeatedly with minimal human intervention. High-performing teams usually practice both Continuous Integration (CI) and Continuous Delivery (CD). VSTS not only automates the build, testing, and deployment of your application, but also gives you complete traceability to see everything in the build, including changes to your code, reviews, and test results, making it a tool that fully supports DevOps practices.
Cloud technology changes the way we reason about troubleshooting, maintaining, and deploying systems. We used to view hardware and software as separate entities, managed by different teams with different priorities and disparate methodologies. That's no longer the case; teams now work together to improve resiliency and maximize agility. There's no better example of this than Infrastructure as Code (IaC). Let's take a deep dive into IaC and examine how it makes your systems more reliable, manageable, and reproducible.

What Is Infrastructure as Code (IaC)?

Infrastructure as Code (IaC) is the practice of maintaining and configuring infrastructure with the same methods used for software. Instead of configuring computers and network hardware manually, you use automated tools and a version control system (VCS) to manage the code that defines them. For many teams, the final step is adding continuous integration and continuous delivery/deployment (CI/CD) pipelines. When you deploy your systems this way, you've adopted the entire IaC stack.

Benefits of Using IaC

Infrastructure as Code is a practice that uses machine-readable scripts to automate the provisioning and management of computing infrastructure. It has several important benefits:

IaC makes your infrastructure consistent and reproducible. When you define your systems in code and manage that code in a VCS, it's easy to recreate and deploy them. You can build the same infrastructure many times, reducing the risk of human error, and it's easier to fall back to an old version of your infrastructure when something goes wrong.
When you run your IaC tools in your CI/CD, your infrastructure is scalable and agile. You can bundle new releases with their infrastructure changes and scale systems up and down with your pipelines.
Your teams can share, review, and track IaC using the same tools you use for the rest of your software, like Git. This makes it easier for your teams to collaborate on software and infrastructure projects.

Declarative vs Imperative Programming

IaC supports both declarative and imperative coding constructs, and learning these approaches helps you pick the right tool for your infrastructure.

Declarative

As the name implies, declarative coding declares the desired state of an object. So in declarative IaC, you define the state of your systems and the tool handles the details. Two popular declarative IaC tools are Terraform and Puppet.

Imperative

Imperative programming uses step-by-step instructions to complete a task; it's the coding style used by languages like Python and Go. In imperative IaC, you define the actions required to bring a system to its desired state. Chef is an example of an imperative IaC tool, while Ansible combines imperative and declarative approaches in its Domain Specific Language (DSL).

When you create your infrastructure, you can also choose between immutable and mutable approaches.

Mutable vs Immutable Infrastructure

Immutable IaC

If something is immutable, you can't change it. If you need to update a setting or add something to it, you must create a new copy and replace the outdated one. Docker containers are immutable: if you want to preserve a container's state across sessions, you need to externalize it, often by connecting the container to a persistent filesystem. DZone has previously covered how engineering teams can approach container security. Kubernetes, since it is based on immutable containers, treats its applications as immutable, too; deployments entail creating new container sets, as the manifest sketch below illustrates.
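Here is a minimal, hypothetical Kubernetes Deployment manifest that makes the immutable model concrete. Changing the image tag does not modify the running containers; Kubernetes creates a replacement set of pods and retires the old ones:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # hypothetical application name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25   # bump this tag to roll out a new container set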
Learn how to set up a CI/CD pipeline with Kubernetes. Terraform treats most infrastructure as immutable: when you apply a configuration change, it creates a new instance and destroys the outdated one.

Related Tutorial: How to build Docker images using Jenkins pipelines.

Advantages

At first glance, this approach seems slow and wasteful. But, like immutability in programming, immutable infrastructure presents several important advantages:

Consistency: Immutable infrastructure, by definition, stays the way you created it, and you can easily restore or rebuild it with your IaC tools.
Auditability: With IaC and immutable infrastructure, your source files are an accurate audit trail for the state of your systems.
Fewer errors: When you combine immutability with IaC, you only change infrastructure via code, so policies like pull requests and audit trails reduce errors. Properly implemented IaC reduces mistakes.

Disadvantages

Like any other methodology, there are disadvantages to weigh against the benefits:

Deployment time: The time needed to deploy systems increases with their number and complexity.
No small fixes: Deployments for immutable infrastructure are all or nothing; even a small fix requires a complete deployment.
Higher resource utilization: Most deployments involve standing up a new instance, followed by a cutover from the old to the new. Depending on the system, this can require significant resources.

Mutable IaC

Mutable is the exact opposite of immutable: if something is mutable, you can update it after you create it. For instance, if you need to change the amount of memory in a cloud instance, you can apply the change to the existing system. Cloud virtual machines like Amazon Elastic Compute Cloud instances are mutable by default; unlike containers, you can reconfigure them without creating new ones. Updating operating systems via package managers like apt and dnf is another example of mutable infrastructure. Ansible and Chef are often used as mutable IaC tools, using their playbooks and cookbooks to update system configurations based on programmatic instructions.

Advantages

Mutable infrastructure has several important advantages over an immutable approach:

You can update your infrastructure quickly. For example, you can apply the latest security patches to your systems as a discrete operation.
Mutable deployments are not all or nothing. You can tailor the scope and timing of updates to individual systems and applications, lowering the risk of each deployment and simplifying scheduling.
Deployment times are not linked to the size and complexity of your systems.

Disadvantages

The flexibility offered by mutable infrastructure does come at a cost, though:

Unlike with immutable tools, your mutable IaC code represents the changes it applied rather than the complete system state, which makes audits more difficult.
Mutable systems are prone to configuration drift.
Mutable deployments may be more complex than immutable ones and pose a greater risk of failure.

The choice between immutable and mutable IaC depends on your specific requirements and the nature of your infrastructure. Many organizations choose an integrated approach, making some systems mutable and others immutable. The playbook sketch below shows the mutable style in practice.
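As a short illustration of mutable IaC, here is a minimal, hypothetical Ansible playbook that patches existing hosts in place instead of replacing them (the webservers group name is an assumption):

# Apply pending package upgrades to existing hosts (mutable infrastructure)
- hosts: webservers
  become: true
  tasks:
    - name: Apply all pending apt upgrades
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist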
Open Source IaC Tools

Open-source IaC tools are an important part of the IaC community. They harness the power of community-driven development, flexibility, and extensibility, and they're excellent choices for automating your infrastructure provisioning and management.

Terraform is a tool for describing cloud infrastructure in declarative code. It supports multiple cloud platforms and services, so you can use it to provision and manage resources across different providers in a consistent and reproducible manner. In Terraform, infrastructure objects are immutable.

OpenTofu is a fork of Terraform, created in response to HashiCorp's decision to switch Terraform to the Business Source License (BUSL). It's compatible with Terraform but, as a fork, will head in a different direction with a different feature set.

Ansible is an automation tool with a declarative language for defining system configurations, although you can use procedural code for many operations, too. It treats infrastructure as mutable, managing it for its entire lifecycle, and is useful for both configuration management and infrastructure provisioning. One of Ansible's biggest advantages is that it works over Secure Shell (SSH) instead of requiring an agent.

Puppet is an agent-based tool for automating infrastructure configuration and management, and it uses a declarative language. One advantage of Puppet's agent is that you can "push" changes to a system on demand, or the agent can "pull" updates on a schedule.

Chef is another agent-based IaC tool. It operates similarly to Puppet but employs imperative "cookbooks" to describe infrastructure; they contain the steps required to build and configure it. Its Ruby-based DSL is more complicated than Puppet's, but some users prefer it for configuration management.

SaltStack is like Puppet and Chef in that it works with an agent-based model. However, it uses an event-driven model to distribute changes to managed systems quickly, and it describes system states with a declarative DSL.

Conclusion

IaC leverages code and coding practices to define and manage infrastructure resources. It makes it easy to add scalability, consistency, and efficiency to your system deployments, which means your team can roll out infrastructure configurations faster and with fewer errors. Terraform, Ansible, Puppet, Chef, and SaltStack are just a few examples of tools that can help streamline your infrastructure provisioning and management. Embrace the power of Infrastructure as Code and revolutionize the way you deploy and manage your systems.
In the dynamic world of online services, site reliability engineering (SRE) has risen as a pivotal discipline, ensuring that large-scale systems maintain their performance and reliability. Bridging the gap between development and operations, SRE is a set of principles and practices that aims to create scalable and highly reliable software systems.

Site Reliability Engineering in Today's World

Site reliability engineering is an engineering discipline devoted to maintaining and improving the reliability, durability, and performance of large-scale web services. Originating from the complex operational challenges faced by large internet companies, SRE incorporates aspects of software engineering and applies them to infrastructure and operations problems. The main goal is to create automated solutions for operational aspects such as on-call monitoring, performance tuning, incident response, and capacity planning.

Further Reading: Top Open Source Projects for SREs.

What Does a Site Reliability Engineer Do?

A site reliability engineer operates at the intersection of software engineering and systems engineering. The role was a natural evolution for many database administrators with deeper system administration skills once cloud modernization began. The role of the SRE encompasses:

Developing software and writing code for service scalability and reliability
Ensuring uptime, maintaining services, and minimizing downtime
Incident management, including handling system outages and conducting post-mortems
Optimizing on-call duties, balancing responsibilities with proactive engineering
Capacity planning, which includes predicting future needs and scaling resources accordingly

Site Reliability Engineering Principles

The core principles of SRE form the foundation upon which its practices and culture are built. One key tenet is automation. SRE prioritizes automating repetitive, manual tasks, which not only minimizes the risk of human error but also frees engineers to focus on more strategic, high-value work. Automation in SRE extends beyond simple task execution; it encompasses self-healing systems that automatically recover from failures, predictive analytics for capacity planning, and dynamic provisioning of resources. This principle seeks to create a system where operational work is managed efficiently, leaving SRE professionals to concentrate on enhancements and innovations that drive the business forward.

Measurement is another cornerstone of SRE. In the spirit of the adage "You can't improve what you can't measure," SRE implements rigorous quantification of reliability and performance. This includes defining clear service level objectives (SLOs) and service level indicators (SLIs) that provide a detailed view of a system's health and user experience. By consistently measuring these metrics, SREs make data-driven decisions that align technical performance with business goals.

Shared ownership is integral to SRE as well. It dissolves the traditional barriers between development and operations, encouraging both teams to take collective responsibility for the software they build and maintain. This collaboration ensures a more holistic approach to problem-solving, with developers gaining more insight into operational issues and operations teams getting involved earlier in the development process.

Lastly, a blameless culture is crucial to the SRE ethos.
By treating failures as opportunities for improvement rather than reasons for punishment, teams are encouraged to share information openly without fear. This approach leads to a more resilient organization, as it promotes a DevOps culture of transparency and continuous learning. When incidents occur, blameless postmortems are conducted, focusing on what happened and how to prevent it in the future rather than on who caused it. This principle not only enhances the team's ability to respond to incidents but also contributes to a positive and productive work environment. Together, these principles guide SRE teams in creating and maintaining reliable, efficient, and continuously improving systems.

The Benefits of Site Reliability Engineering

SRE not only improves system reliability and uptime but also bridges the gap between development and operations, leading to more efficient and resilient software delivery. By adopting SRE principles, organizations can achieve a balance between innovation and stability, ensuring that their services are both cutting-edge and dependable for their users.

Benefits:
Improved Reliability: Ensures systems are dependable and trustworthy.
Efficiency: Automation reduces manual labor and speeds up processes.
Scalability: Provides an essential framework for systems to grow without a decrease in performance.
Innovation: Frees up engineering time for feature development.

Drawbacks:
Complexity: Can be difficult to implement in established systems without proper expertise.
Resource Intensive: Initially requires significant investment in training and tooling.
Balancing Act: Striking the right balance between new features and reliability can be challenging.

Site Reliability Engineering vs DevOps

Site Reliability Engineering (SRE) and DevOps are two methodologies that converge on the same aim of streamlining software development and enhancing system reliability, but they take distinct paths to get there. DevOps is primarily focused on melding the development and operations disciplines to accelerate the software development lifecycle. This is achieved through the practices of continuous integration and continuous delivery (CI/CD), which ensure that code changes are automatically built, tested, and prepared for release to production. The heart of DevOps lies in its cultural underpinnings: breaking down silos, fostering cross-functional team collaboration, and promoting shared responsibility for the software's performance and health.

Learn the Difference: DevOps vs. SRE vs. Platform Engineer vs. Cloud Engineer.

SRE, in contrast, takes a more structured approach to reliability, providing concrete strategies and a framework for maintaining robust systems at scale. It applies a blend of software engineering principles to operational problems, which is why an SRE team's work often includes writing code for system automation, crafting error budgets, and establishing service level objectives (SLOs). While it encapsulates the collaborative spirit of DevOps, SRE zeroes in on ensuring system reliability and stability, especially in large-scale operations. It operationalizes DevOps by adding specific practices oriented toward proactive problem prevention and quick problem resolution, ensuring that the system not only works well under normal conditions but also maintains performance during unexpected surges or failures.
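To make the SLI, SLO, and error budget ideas concrete, here is a hypothetical sketch of how an availability SLI might be encoded as a Prometheus recording rule. The http_requests_total metric name and the 30-day window are assumptions for illustration; the recorded ratio is compared against an SLO (say, 99.9%) to track how much error budget remains:

groups:
  - name: slo
    rules:
      # Fraction of requests over 30 days that did not return a 5xx error
      - record: sli:http_availability:ratio_30d
        expr: |
          sum(rate(http_requests_total{code!~"5.."}[30d]))
          /
          sum(rate(http_requests_total[30d]))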
Monitoring, Observability, and SRE

Monitoring and observability form the foundational pillars of SRE. Monitoring is the systematic process of gathering, processing, and interpreting data to gain a comprehensive view of a system's current health. It relies on metrics and logs to track the performance and behavior of the system's components, with the primary goal of detecting anomalies and performance deviations that may indicate underlying issues, allowing for timely intervention. Observability, on the other hand, extends beyond the scope of monitoring by providing insight into the system's internal workings through its external outputs. It is the ability to infer the internal state of the system from data like logs, metrics, and traces, without adding new code or instrumentation. SRE teams leverage observability to understand complex system behaviors, which enables them to identify potential issues early and address them proactively. By integrating these practices, SRE ensures that the system not only remains reliable but also meets its business objectives, delivering a seamless user experience.

Conclusion

Site reliability engineering is essential for businesses that depend on providing reliable online services. With its blend of software engineering and systems management, SRE helps ensure that systems are not just functional but also resilient, scalable, and efficient. As organizations increasingly rely on complex systems to conduct their operations, the principles and practices of SRE will become ever more integral to their success. This analysis has touched on the multifaceted role of SRE in modern web services, its core principles, and the tangible benefits it brings to the table. Understanding the distinction between SRE and DevOps clarifies SRE's unique position in the technology landscape and highlights how essential the discipline is to achieving and maintaining high standards of reliability and performance in today's digital world.