Using environment variables to store secrets, instead of writing them directly into your code, is one of the quickest and easiest ways to add a layer of protection to your projects. There are many ways to use them, but a properly utilized `.env` file is one of the best, and I’ll explain why.

They’re Project Scoped

Environment variables are a part of every major operating system: Windows, macOS, and all the flavors of *nix (Unix, BSD, Linux, etc.). They can be set at an operating system level, user level, session level… It gets complicated, and where/how you define them determines the scope in which they can be accessed. This variety of scopes also creates the distinct possibility of variable collisions. If you’re looking for an environment variable named `API_KEY`, it could be redefined in each scope, and if you’re not steeped in that OS, it’s extra work to be sure you’re not clobbering something set at a different scope that some other app or service needs.

`.env` files are only consumed at runtime and only in the context of the app that’s consuming them. That prevents them from clobbering any other environment variables on the system that might be consumed outside your app.

They Can Be "Ignored"

If you’re working on a JavaScript application in Node, you can’t ignore your `index.js` file in the version control system; it contains essential code. But you can set your `.gitignore` file to have Git ignore your `.env` file. If you do that from the inception of your repository, you won’t commit secrets to the project’s Git history. A better option is to also include a `.sample.env` file that sets the variable names but only includes dummy data or blanks. People cloning/forking and using the repository can get the secrets via another route, then `cp .sample.env .env` (in a terminal) and assign the real values to the proper variables in the ignored `.env` file.
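The ignore-plus-sample setup takes only a few commands. Here is a minimal sketch, run from the root of a fresh repository (the variable names are placeholders; use your own):

```shell
# Tell Git to ignore the real .env from the very start
echo ".env" >> .gitignore

# Commit a sample file that names the variables but holds only blanks
printf 'LEADERBOARD_ENDPOINT=\nLEADERBOARD_KEY=\n' > .sample.env

# Anyone cloning/forking copies the sample and fills in real values
cp .sample.env .env
```

With this in place, `.sample.env` documents what the app needs while the real `.env` never enters the Git history.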
They’re Relocatable

While most systems will default to looking for the `.env` file in the root of the app’s primary directory, you can always keep it a level or two higher. So if, for example, a server configuration error or code bug makes it possible to view all the files at the root of your web app as a directory listing, the `.env` will not be there for easy pickings. This is not an uncommon practice: SSH keys are stored by default at `~/.ssh` (a "hidden" subdirectory of the user’s home directory) on Windows, Mac, and Linux, and you do not need to move them into the root directory of a project that uses them.

A Quick .env Demo in Node

Let’s say your working directory for the app you’re building is `~/Documents/work/projects/games/tictactoe`, and `tictactoe` is the root directory for the app and your Git repository. You can put the `.env` in the next directory up, `games`. And while we generally call the file type `.env`, you can call it `.toecreds` if you want a distinct file that other processes would never even think to touch. We’ll use that name in the demo.

Here’s how you’d do that in Node.js. In your `games/tictactoe` directory, run `npm init` (go with the defaults) and then `npm install dotenv`. Create your `.toecreds` file in the `games` directory and fill it with information in the format `VARIABLE_NAME=VALUE` (no spaces). You can also start a line with `#` for a comment.

```shell
# Leaderboard SaaS
LEADERBOARD_ENDPOINT=https://example.com/leaderboard/v1
LEADERBOARD_KEY=jknfwgfprgmerg…
```

At the top of your `index.js` (or whatever file is your launch point) in `games/tictactoe`, include the following lines:

```javascript
require('dotenv').config({ path: '../.toecreds' })
console.log(process.env.LEADERBOARD_ENDPOINT)
```

Run your `index.js`, and the endpoint URL will be output to the terminal.
Meanwhile, the environment variables you set in it will not be available from the terminal, and because the file lives a level above your repository, it can’t accidentally be swept up if you misconfigure `.gitignore`. Try adding a long timeout to the script and then running `node index.js &` to return control to the terminal after invoking the script. While the script is running in that shell session, the environment variables available to the shell still do not contain the secrets; they are scoped to your running application.

You can have dev, test, and prod credential sets, with your CI/CD tooling pulling the correct keys for the deployment target from a secrets manager and writing the `.toecreds` (or `.env`) file to the same relative directory.

And There You Have It

The use of a `.env` file helps keep your app’s secrets from ever being committed to your version control and provides an additional layer of protection against your secrets being discovered by hackers or other prying eyes. It’s a great addition to your developer/DevOps toolbox.
PII and Its Importance in Data Privacy

In today's digital world, protecting personal information is of primary importance. As more organizations allow their employees to interact with AI interfaces for faster productivity gains, there is a growing risk of privacy breaches and misuse of personally identifiable information (PII) such as names, addresses, Social Security numbers, email addresses, and more. Unauthorized exposure or misuse of PII can have severe consequences, such as identity theft, financial fraud, and massive damage to a company's reputation. Developers must, therefore, implement effective measures to detect and redact PII from their databases to comply with data protection regulations and ensure privacy.

Detecting Personally Identifiable Information

There are two main approaches for identifying PII within datasets. The first is the use of rule-based systems. This approach involves creating specific rules and patterns that check for the presence of PII in a given data collection. While less sophisticated than AI-based models, rule-based systems can effectively capture popular PII formats and structures. A good example is using a simple regex pattern to detect phone numbers in JavaScript:

```javascript
function detectPhoneNumber(phoneNumber) {
  const phoneRegex = /^(?:\(\d{3}\)\s?|\d{3}-|\d{3}\s?)\d{3}-?\s?\d{4}$/;
  return phoneRegex.test(phoneNumber);
}
```

Let's test the above function with a couple of different phone number formats:

```javascript
console.log(detectPhoneNumber("123-456-7890"));   // true
console.log(detectPhoneNumber("(123) 456-7890")); // true
console.log(detectPhoneNumber("123 456 7890"));   // true
console.log(detectPhoneNumber("1234567890"));     // true
```

The other approach involves the use of machine learning models. These models, like spaCy's, are trained to recognize patterns and structures that indicate the presence of PII.
By leveraging these models, you can create robust PII detection systems that can quickly scan through large volumes of data.

Overview of AI's Role in PII Detection and Redaction

In today's business environment, where an increasing amount of data is collected and shared, AI-powered solutions such as Amazon Comprehend, Microsoft Presidio, and Google Cloud DLP (Data Loss Prevention) can play a crucial role in enhancing the accuracy of data privacy efforts while significantly reducing the time and effort involved.

PII Detection Using Amazon Comprehend

Amazon Comprehend is a powerful AI service for PII detection. It uses natural language processing (NLP) techniques to analyze text and identify PII. Here is a simple detection example using Amazon Comprehend's `detect-pii-entities` CLI functionality:

Note: You can find installation instructions here.

```shell
aws comprehend detect-pii-entities \
    --text "Dr. Emily Johnson recently visited our clinic. Her contact number is (555) 123-4567, and her email is emily.johnson@example.com. She lives at 456 Elm Street, Springfield, IL 62704." \
    --language-code en
```

When you successfully run the command, it responds with an object containing any potentially sensitive information detected, accompanied by a corresponding detection score.

PII Redaction Using Microsoft Presidio

In addition to detection, organizations must redact PII from their data to ensure privacy protection. All three AI solutions mentioned above from Amazon, Google, and Microsoft offer capabilities for both detecting and redacting PII. Let's take a look at Microsoft Presidio. Like Amazon Comprehend, it uses NLP techniques not only to detect but also to help anonymize sensitive data in text and images. Below is a basic example of integrating Microsoft Presidio for PII redaction using Python.
Step 1: Installation

```shell
pip install presidio-analyzer
pip install presidio-anonymizer
python -m spacy download en_core_web_lg
```

Step 2: Detection and Redaction (Anonymization)

```python
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

text = "Contact me at (555) 123-4567 for more information."

# Load the analyzer
analyzer = AnalyzerEngine()

# Call the analyzer to get results
results = analyzer.analyze(text=text, entities=["PHONE_NUMBER"], language='en')
print(results)

# The analyzer results are passed to the AnonymizerEngine for redaction (anonymization)
anonymizer = AnonymizerEngine()
anonymized_text = anonymizer.anonymize(text=text, analyzer_results=results)
print(anonymized_text.text)
```

If you want to see more examples, you can find them in the official documentation.

Best Practices and Ethical Considerations in Using AI for PII Protection

When integrating AI solutions for PII detection and redaction, you should consider the following best practices for optimal results.

1. Classification of Datasets

You should first map and classify all data sources to streamline implementation and prioritize areas needing attention.

2. Customization and Fine-Tuning of Existing AI Models

While off-the-shelf AI solutions offer remarkable capabilities, customizing and fine-tuning the models according to an organization's specific PII detection needs can be highly beneficial.

3. Continuous Monitoring and Auditing

Continuous monitoring and auditing of configured AI solutions is essential to identify any anomalies or gaps in privacy protection. Additionally, there should be comprehensive employee PII training programs and a plan for expanding the current PII setup as the volume and diversity of data grow.

There are also ethical considerations that developers should keep in mind, such as fairness and bias, transparency, confidentiality, consent, and data ownership.
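As a toy illustration of the first practice, a script can sweep a data source and tag each field with the kinds of PII it appears to contain. This is only a sketch: the regexes below are illustrative and would miss many real-world formats, and a production classifier would call a service such as Amazon Comprehend or Microsoft Presidio instead of hand-rolled rules.

```python
import re

# Illustrative-only patterns; a real deployment would use a PII service.
PII_PATTERNS = {
    "phone": re.compile(r"(?:\(\d{3}\)\s?|\d{3}[-\s]?)\d{3}[-\s]?\d{4}"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def classify_record(record):
    """Tag each field of a record with the PII types detected in it."""
    return {field: sorted(name for name, rx in PII_PATTERNS.items()
                          if rx.search(str(value)))
            for field, value in record.items()}

record = {"name": "Emily Johnson",
          "contact": "(555) 123-4567 / emily.johnson@example.com",
          "notes": "SSN on file: 123-45-6789"}
print(classify_record(record))
```

Tags like these make it easy to prioritize which tables or fields need redaction first, which is exactly the mapping step the first practice calls for.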
Conclusion

Leveraging AI solutions for PII detection and redaction is an impressive step forward in the ongoing effort to safeguard privacy. With advanced AI capabilities from platforms like Amazon Comprehend and Microsoft Presidio, organizations can effectively identify and redact PII, reducing the risk of privacy breaches and enhancing data security overall. Lastly, developers must stay up to date with the latest AI developments and have contingency plans to adapt their privacy protection strategies.

References

- Microsoft Presidio Documentation
- Amazon Comprehend Documentation
- Google Cloud Data Loss Prevention (Cloud DLP) Documentation
Extended Berkeley Packet Filter (eBPF) is a programming technology designed for the Linux operating system (OS) kernel space, enabling developers to create efficient, secure, and non-intrusive programs. Unlike its predecessor, the Berkeley Packet Filter (BPF), eBPF allows the execution of sandboxed programs in privileged contexts, such as the OS kernel, without the need to modify kernel source code or disrupt overall program execution. This technology extends the features of existing software at runtime, facilitating tasks like packet filtering, high-performance analysis, and the implementation of firewalls and debugging protocols in both on-site data centers and cloud-native environments.

While Dynamic Linker Hijacking is frequently utilized by malware to establish persistence on a system, eBPF can effectively monitor attempts at Dynamic Linker Hijacking, with a specific emphasis on modifications to the `/etc/ld.so.preload` file. We'll showcase the usage of eBPF to intercept relevant syscalls and explain how preloaded libraries are typically used by malware to inject arbitrary code into the execution flow of trusted programs.

Note: This article is primarily intended to showcase eBPF usage and its capabilities. The provided solution is not intended as a serious security solution but rather as an educational example. Please also note that there are easier ways to track file modifications; e.g., you can simply use `tail` or `inotifywait`. It's also pretty atypical to have `/etc/ld.so.preload` in a traditional Linux setup, so the mere existence of the file should already raise a couple of red flags.

Dynamic Linker Hijacking Intro

Dynamic Linker Hijacking is a technique wherein attackers exploit the dynamic linking process to inject malicious code into trusted programs. In Linux, programs rely on shared libraries (shared objects, or SO files) for various functionalities. Dynamic linking allows efficient code reuse and resource sharing.
Attackers place a malicious shared library in a location where the target program searches for libraries. When the program runs and dynamically links to this shared library, the attacker's code is loaded into the program's memory. Notably, attackers commonly target `/etc/ld.so.preload` and environment variables like `LD_PRELOAD` to achieve Dynamic Linker Hijacking.

`/etc/ld.so.preload` is a configuration file on Linux that contains a list of paths to additional shared libraries that should be loaded before all other libraries when a program starts. When a program is executed, the dynamic linker/loader (`ld.so`) resolves and loads the required shared libraries. By default, the linker looks for libraries in standard paths such as `/lib` and `/usr/lib`. However, the presence of the `/etc/ld.so.preload` file allows users to extend this behavior.

In the context of Dynamic Linker Hijacking, attackers may manipulate `/etc/ld.so.preload` to preload a malicious shared library before the legitimate libraries. This way, when a trusted program starts, the attacker's code is loaded into the program's memory space, enabling unauthorized access or other malicious activities. It's important to note that modifications to `/etc/ld.so.preload` typically require superuser (root) privileges. Learn more about Linux attack techniques and overwriting preload libraries.

eBPF Intro

eBPF, or extended Berkeley Packet Filter, originated as an enhancement to the traditional BPF in the Linux kernel. Initially designed for network packet filtering, eBPF has evolved into a versatile technology with broader applications. Look at some examples on Wikipedia. Nowadays, it's readily available to everyone and fairly easy to develop and debug. This is partially thanks to bcc, a set of tools that makes BPF programs easier to write, with kernel instrumentation in C (including a C wrapper around LLVM) and front-ends in Python and Lua. You can read more about eBPF here and here, or immerse yourself in this beautiful book.
The eBPF Verifier

Before delving into the main program, it's crucial to understand the role of the eBPF verifier. The verifier is the component responsible for ensuring the safety and security of eBPF programs. It analyzes the code and enforces rules to prevent potentially unsafe operations, such as accessing forbidden memory regions. It also performs some basic checks on program correctness.

Let's take a look at an example. Consider a function that checks if a string ends with a specific suffix:

```c
static bool ends_with(char *str1, char *str2)
{
    int len1 = 0;
    int len2 = 0;

    while (str1[len1++]);
    while (str2[len2++]);

    if (len2-- > len1--)
        return false;

    while (len2 >= 0) {
        if (str1[len1] != str2[len2])
            return false;
        len1--;
        len2--;
    }
    return true;
}
```

If you try to load it, the verifier will reject it with output like this:

```
; while (str1[len1++]);
69: (71) r2 = *(u8 *)(r2 +0)
invalid read from stack R2 off=0 size=1
processed 25993 insns (limit 1000000) max_states_per_insn 4 total_states 1720 peak_states 49 mark_read 7
```

This is due to potentially unsafe memory access: imagine we've passed an incorrect string that is not null-terminated. The loop could then read past the end of the buffer.
To make the verifier happy, we should add loop bounds like this: `while (str1[len1++] && len1 < 256);`

Monitoring Dynamic Linker Hijacking With eBPF

Now, let's look at the main eBPF program, which traces the openat syscall and focuses on attempts to modify `/etc/ld.so.preload`:

```python
#!/usr/bin/python3
import json
import ctypes as ct

from bcc import BPF, DEBUG_SOURCE

# eBPF program code
bpf_program = r"""
#define O_WRONLY 01
#define O_RDWR 02
#define TEST_PATH_LIMIT 256

static bool ends_with(char *str1, char *str2)
{
    int len1 = 0;
    int len2 = 0;

    while (str1[len1++] && len1 < TEST_PATH_LIMIT);
    while (str2[len2++] && len2 < TEST_PATH_LIMIT);

    if (len2-- > len1--)
        return false;

    while (len2 >= 0) {
        if (str1[len1] != str2[len2])
            return false;
        len1--;
        len2--;
    }
    return true;
}

TRACEPOINT_PROBE(syscalls, sys_enter_openat)
{
    char* TRACED_PATH_SUFFIXES[] = { "ld.so.preload" };
    char filename[TEST_PATH_LIMIT];

    bpf_probe_read_user_str(&filename, sizeof(filename), args->filename);

    // We only care about opens that can modify the file
    if (!(args->flags & O_WRONLY) && !(args->flags & O_RDWR)) {
        return 0;
    }

    for (int i = 0; i < sizeof(TRACED_PATH_SUFFIXES) / sizeof(TRACED_PATH_SUFFIXES[0]); i++) {
        if (ends_with(filename, TRACED_PATH_SUFFIXES[i])) {
            bpf_trace_printk("Catch writing to file: %s\n", filename);
            return 0;
        }
    }
    return 0;
}
"""

# See debug flags with explanations here:
# https://github.com/iovisor/bcc/blob/master/src/python/bcc/__init__.py
b = BPF(text=bpf_program, debug=DEBUG_SOURCE)

print("Started tracing\n")

while 1:
    try:
        (task, pid, cpu, flags, ts, msg) = b.trace_fields()
    except ValueError:
        continue
    except KeyboardInterrupt:
        exit()
    print(msg)
```

In this program, we use the TRACEPOINT_PROBE macro to intercept the sys_enter_openat syscall. We then check whether the file being opened has a suffix matching one of the tracked path suffixes, such as "ld.so.preload". If a match is found, a trace message is printed to the console.
Running the eBPF Program

To run the eBPF program, save it in a file named `monitor_dyn_linker_hijacking.py` and execute:

```shell
sudo python3 monitor_dyn_linker_hijacking.py
```

Now open a second terminal window and simulate a file modification attempt:

```shell
sudo touch /etc/ld.so.preload
```

Observe the output in the first terminal, where the eBPF program is running:

```
Started tracing

b'Catch writing to file: /etc/ld.so.preload'
```

Conclusion

Having shown that eBPF can effectively monitor Dynamic Linker Hijacking attempts, specifically alterations to the `/etc/ld.so.preload` file, we invite you to explore further. Now that you've witnessed eBPF's capability to intercept syscalls and seen examples of how malware employs preloaded libraries for code injection into trusted program flows, feel free to experiment with various file modification scenarios. This hands-on exploration will deepen your understanding of eBPF's versatile capabilities in enhancing system security.
In an era where digital threats are constantly evolving, understanding and mitigating these risks is crucial for organizations of all sizes. Threat modeling emerges as a pivotal process in this landscape, offering a structured approach to identify, assess, and address potential security threats. This analysis delves into the intricacies of threat modeling, exploring its mechanisms, methodologies, real-world applications, benefits, and challenges.

What Is Threat Modeling, and Why Is It Important?

Threat modeling is a proactive approach in cybersecurity where potential threats and vulnerabilities within an information system are identified and analyzed. It involves a systematic examination of an application, system, or business process to highlight security weaknesses and the potential impact of different threat scenarios.

How Does Threat Modeling Work?

The process of threat modeling typically follows these steps:

- Defining security objectives and establishing what needs to be protected
- Creating an architecture overview, mapping out the system or application architecture
- Identifying threats, using various techniques to pinpoint potential threats
- Determining vulnerabilities and assessing where the system might be exploited
- Documenting and managing risks, and developing strategies to mitigate identified risks

Threat Modeling Adoption and Implementation

The successful adoption of threat modeling within an organization hinges on several critical steps. Initially, it is imperative to focus on training and awareness: dedicating time and resources to educate the development team, security personnel, and stakeholders about the significance of threat modeling and the various techniques used to conduct it. A thorough understanding of threat modeling's role in identifying and preempting security vulnerabilities is essential for cultivating a security-conscious culture within the team.
Another key aspect is integrating threat modeling into the development lifecycle. By incorporating this process from the early stages of software development, organizations can ensure that security considerations are not an afterthought but a fundamental component of the development process. Embedding threat modeling early on helps identify potential security issues when they are generally easier and less costly to address.

Lastly, the nature of cybersecurity demands that threat models are not static. With the ever-evolving landscape of cyber threats and the ongoing development of software systems, regular reviews are essential. These reviews should be scheduled to update and refine threat models, ensuring they accurately reflect the current threat environment and any changes within the system itself. By regularly revisiting and updating the threat models, organizations can maintain a robust and responsive security posture that adapts to new challenges as they arise.

Incorporating these practices into an organization's security strategy is not a one-time task but a continuous effort that requires commitment and adaptability. As threats evolve and systems grow more complex, the processes of training, integrating, and reviewing must also progress to keep pace with the dynamic nature of cybersecurity.

Threat Modeling Methodologies

Several methodologies serve as the backbone for the threat modeling process, each with its own focus and structure to guide security experts in identifying and mitigating potential threats. These methodologies provide frameworks that outline how to approach the complex task of threat analysis, ensuring a systematic and thorough examination of security risks. They include, but aren't limited to:

STRIDE identifies threats by examining what can go wrong in the application, system, IT landscape, or business process being threat modeled.
It categorizes threats into six distinct types (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege), providing a clear lens through which to view potential vulnerabilities.

PASTA (Process for Attack Simulation and Threat Analysis) is a risk-centric methodology that always ties back to the business process while simulating and testing the viability of threats. It prioritizes threats based on their likelihood and potential impact.

TRIKE is an open-source, requirements-based methodology focused on defining acceptable levels of risk and assigning those risk levels to stakeholders, aligning security efforts with what each stakeholder deems acceptable.

These methodologies often incorporate elements of asset identification, threat enumeration, and vulnerability mapping, alongside strategies for mitigation and risk management. By following these established guides, organizations can create comprehensive threat models tailored to their specific systems, applications, and operational contexts. The choice of methodology depends on various factors, including the type of system under review, the resources available, and the expertise of the team responsible for the threat modeling exercise.

Threat Modeling Examples

In the digital realm, where threats loom large over various sectors, real-world applications of threat modeling are both diverse and critical. For e-commerce platforms, threat modeling plays a key role in identifying and mitigating risks such as data breaches and payment fraud. These platforms handle a wealth of sensitive customer information and financial details, making them a prime target for cybercriminals.
Through threat modeling, e-commerce businesses can foresee potential attack vectors, such as SQL injection or cross-site scripting, that could lead to unauthorized access to customer data or financial theft. By preemptively recognizing these threats, e-commerce sites can implement robust encryption, secure payment gateways, and continuous monitoring systems to protect both their assets and customer trust.

Image 1: Example of a retail threat model

Financial systems also greatly benefit from threat modeling, with a sharp focus on transaction security and data integrity. The financial industry is under constant threat from sophisticated attack methods aimed at intercepting transactions or manipulating data for fraud. Threat modeling helps financial institutions map out the flow of sensitive data and pinpoint weaknesses that might be exploited by attackers to alter transaction details or siphon funds. These insights are crucial for developing layered security measures, establishing strict authentication protocols, and ensuring the integrity of financial transactions from start to finish.

Healthcare applications, on the other hand, must address not only the security of sensitive health information but also compliance with stringent regulations such as the Health Insurance Portability and Accountability Act (HIPAA) in the United States. Threat modeling in healthcare can reveal how personal health information might be exposed or compromised through various channels, whether through insider threats, unsecured endpoints, or third-party services. By understanding these potential threats, healthcare providers can implement controls like access management, data encryption, and regular audits to ensure that patient data is handled securely and in compliance with legal and ethical standards. In this way, threat modeling is indispensable for upholding the confidentiality, availability, and integrity of health information systems.
Threat Modeling Benefits and Challenges

Threat modeling has significant benefits and challenges; the following table shows some of these and how they correspond to each other.

| Benefit | Challenge |
| --- | --- |
| Proactive security posture: helps in anticipating and mitigating potential threats | Resource intensive: requires time, expertise, and often specific tools |
| Informed decision making: provides a framework for making security-related decisions | Evolving threat landscape: keeping the threat model updated with emerging threats is an ongoing challenge |
| Compliance and trust: assists in meeting regulatory requirements and building customer trust | Complexity in large systems: threat modeling is harder in larger, more intricate systems and organizations |

Conclusion

Threat modeling is an essential component in the arsenal of modern cybersecurity strategies. While it comes with its set of challenges, its benefits in identifying and mitigating risks are invaluable. As cyber threats continue to evolve, so too must our approaches to understanding and combating them. Whether for a cybersecurity professional or an organization striving to fortify its digital defenses, understanding threat modeling is a step toward a more secure operational environment.
The National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF) stands as a beacon of guidance for organizations navigating the intricate landscape of cybersecurity. In an era where cloud computing has become integral to software development, the fusion of NIST CSF principles with cloud security is paramount. This comprehensive guide is tailored specifically for developers, offering insights and actionable recommendations to fortify their endeavors in the cloud.

Core Functions

As developers navigate the core functions, they gain a holistic understanding of how the NIST CSF can be applied to enhance security practices in the cloud. The guidance provided empowers developers to integrate security seamlessly into the software development lifecycle for cloud-based applications, fostering a resilient and secure computing environment.

- Identify: The "Identify" function involves understanding and managing cybersecurity risks in the cloud. Developers gain insights into how to identify assets, data, and potential risks specific to cloud environments.
- Protect: The "Protect" function focuses on implementing safeguards to ensure the security of cloud-based applications and data, covering protective measures such as encryption, access controls, and secure configurations tailored for cloud environments.
- Detect: Developers learn strategies for identifying and monitoring cybersecurity events and anomalies in cloud systems, including cloud-specific threat detection methods, monitoring practices, and tools that enhance the ability to promptly detect and respond to security incidents.
- Respond: The "Respond" function addresses the actions developers should take in the event of a cybersecurity incident in the cloud, providing guidance on creating cloud-specific incident response plans, coordination strategies, and best practices for effectively responding to and containing incidents, minimizing potential damage.
- Recover: The "Recover" function explores strategies for recovering from cybersecurity incidents in the cloud, including continuity and recovery planning specific to cloud environments.

Applicability of NIST CSF to Cloud Environments

It is imperative for developers to understand the specific cloud service and deployment models in use, aligning the NIST CSF's principles with the nuances of the cloud environment. Data classification and sensitivity play a pivotal role, guiding developers to implement security controls that safeguard crucial information. A comprehensive risk assessment, tailored to the intricacies of cloud computing, must be conducted, prioritizing risks identified in alignment with the CSF's "Identify" function. Integration with cloud security standards, diligent identity and access management practices, encryption for data protection, and incident response planning tailored for the cloud are paramount. Continuous monitoring tools and cross-functional collaboration ensure real-time detection and response, while adherence to regulatory compliance standards is vital for robust cloud security. By addressing these considerations, developers can seamlessly apply the NIST CSF to cloud environments, fortifying their applications against evolving cybersecurity challenges.

Guidelines and Best Practices

Identify (NIST CSF) in Cloud Environments

Identifying assets, data, and risks specific to cloud environments requires a strategic approach. Here are the top three guidelines:

1. Comprehensive Cloud Asset Inventory: Maintain an up-to-date inventory of all cloud assets, including virtual machines, storage, databases, and network components. Leverage automated tools and cloud provider resources for accurate asset tracking.
- Data Classification and Mapping: Classify data based on sensitivity and relevance to the organization. Map data flows within the cloud ecosystem to understand how data moves between different cloud services and storage.
- Thorough Risk Assessment: Conduct regular, thorough risk assessments that account for both general cybersecurity risks and those specific to cloud environments. Prioritize risks based on their potential impact on cloud assets and data.

Protect (NIST CSF) in Cloud Security

By implementing protective measures, organizations can significantly enhance the security of their cloud-based systems, safeguarding against unauthorized access and potential data breaches. Here are the top three recommendations:

- Encryption Practices: NIST CSF emphasizes the importance of robust encryption practices to secure data both in transit and at rest within the cloud. Implementing strong encryption algorithms and adhering to recommended cryptographic protocols are key components of safeguarding sensitive information.
- Access Controls and Least Privilege: NIST advocates the principle of least privilege and effective access controls. Ensuring that users have the minimum access necessary for their roles helps mitigate the risk of unauthorized access to cloud-based applications and data. Cloud provider Identity and Access Management (IAM) tools can be leveraged for precise control.
- Secure Configurations and Continuous Monitoring: NIST encourages organizations to follow the secure configuration best practices provided by cloud services. Regularly auditing and updating configurations helps address potential vulnerabilities. Additionally, continuous monitoring is crucial to detect and respond to security incidents promptly, aligning with NIST's emphasis on the "Detect" function.

Detect (NIST CSF) in Cloud Environments

Detecting cybersecurity events and anomalies in cloud systems requires strategic approaches to promptly identify potential threats.
Here are the top three strategies:

- Implement Advanced Threat Detection Tools: Deploy advanced threat detection tools designed for cloud environments. These tools often leverage machine learning, behavioral analytics, and anomaly detection to identify unusual patterns of behavior that may indicate a cybersecurity incident.
- Utilize Cloud-Native Security Services: Leverage the cloud-native security services provided by cloud service providers. These often include security information and event management (SIEM) features, which can aggregate and analyze logs and events from various cloud resources to detect anomalies.
- Establish Continuous Monitoring: Implement continuous monitoring practices to track activities and events within the cloud environment in real time. This proactive approach allows organizations to quickly detect deviations from normal behavior and respond promptly to potential security threats.

Respond (NIST CSF) to Cloud Security Incidents

By following these practices, organizations can develop a robust, tailored approach to responding to and containing cybersecurity incidents in cloud environments, fostering resilience and minimizing potential damage.

- Develop and Test Cloud-Specific Incident Response Plans: NIST CSF emphasizes the importance of creating incident response plans tailored specifically for cloud environments. These plans should outline procedures for identifying, responding to, and recovering from incidents in the cloud. Regular testing and simulation exercises ensure their effectiveness.
- Coordinate and Collaborate Across Stakeholders: The framework underscores the need for coordination and collaboration among various stakeholders, including cloud service providers, internal teams, and external partners. Establishing clear communication channels and response protocols enhances the organization's ability to contain and mitigate incidents in a timely manner.
- Leverage Continuous Monitoring for Early Detection: NIST encourages the implementation of continuous monitoring practices in the cloud. Continuous monitoring helps detect cybersecurity incidents at an early stage, enabling faster response and containment. Integrating automated tools for real-time threat detection enhances the organization's overall incident response capabilities.

Recover (NIST CSF) in Cloud Security

The following approaches strengthen the ability to recover from cybersecurity incidents in the cloud and ensure a resilient, adaptive cybersecurity posture.

- Develop and Test Cloud-Specific Recovery Plans: NIST CSF emphasizes the importance of having recovery plans tailored specifically for cloud environments. These plans should outline the steps for restoring systems and data in the cloud after a cybersecurity incident. Regular testing and validation ensure their effectiveness and the organization's ability to resume normal operations.
- Prioritize Cloud-Based Continuity and Resilience: The framework encourages organizations to prioritize continuity and resilience planning in the cloud. This involves identifying critical cloud services and data, implementing backup and redundancy measures, and ensuring the availability of the resources needed for recovery. By prioritizing cloud-based continuity, organizations enhance their ability to recover swiftly from incidents.
- Integrate Lessons Learned for Continuous Improvement: NIST CSF advocates a continuous-improvement mindset in the recovery phase. Organizations should conduct post-incident reviews, analyze the effectiveness of their response and recovery efforts, and integrate lessons learned into future recovery plans. This iterative approach refines and enhances recovery capabilities over time.

Integrating NIST CSF Into the SDLC for Cloud-Based Applications

Integrating the NIST CSF into the Software Development Lifecycle (SDLC) for cloud-based applications is paramount.
Begin by embedding security seamlessly across all SDLC phases, from planning to maintenance, treating it as an integral aspect of development. Align NIST CSF functions with SDLC stages, mapping Identify, Protect, Detect, Respond, and Recover to the corresponding development steps. Implement secure coding practices specific to cloud environments, addressing encryption, access controls, and configurations. Integrate automated security testing into the continuous integration/continuous deployment (CI/CD) pipeline to identify vulnerabilities early. Provide comprehensive developer training on NIST CSF principles, fostering a deep understanding of security best practices in cloud contexts. Conduct regular security reviews and assessments aligned with NIST CSF recommendations, emphasizing continuous monitoring for cloud security throughout the SDLC. Clearly document and communicate security requirements, ensuring adherence and awareness among development teams. This holistic approach enhances the security resilience of cloud-based applications and mitigates risks throughout the development lifecycle.

Challenges and Considerations

Applying the NIST CSF in cloud security introduces specific challenges that organizations must address to ensure effective implementation. Here are some of those challenges and potential ways to overcome them:

Dynamic Cloud Environment
Challenge: Cloud environments are dynamic and scalable, making it difficult to maintain a comprehensive, up-to-date inventory of assets.
Overcoming Obstacles: Leverage the automated asset-discovery tools provided by cloud service providers. Implement continuous monitoring to dynamically adjust security measures as the cloud environment changes.

Shared Responsibility Model
Challenge: Cloud security operates under a shared responsibility model in which both the cloud service provider and the customer have specific security responsibilities. Coordinating these responsibilities can be complex.
Overcoming Obstacles: Clearly define and understand the delineation of responsibilities between the cloud provider and the organization. Establish strong communication channels to address security aspects collaboratively.

Data Privacy and Compliance
Challenge: Navigating data privacy regulations and compliance requirements across jurisdictions can be complex, especially when data is stored and processed in the cloud.
Overcoming Obstacles: Stay informed about regional and industry-specific compliance requirements. Work closely with legal and compliance teams to ensure cloud security practices align with regulatory standards.

Integration With DevOps
Challenge: Integrating NIST CSF into DevOps processes requires aligning security practices with the speed and agility of continuous integration and continuous deployment (CI/CD) pipelines.
Overcoming Obstacles: Adopt DevSecOps practices to embed security seamlessly into the development lifecycle. Use automation to incorporate security checks into CI/CD pipelines without impeding development speed.

Vendor Management
Challenge: Organizations often rely on multiple cloud service providers, each with its own security measures and interfaces.
Overcoming Obstacles: Standardize security controls across cloud providers where possible. Develop a consistent approach to vendor risk management, including security assessments and due diligence.

Identity and Access Management (IAM)
Challenge: Managing user identities and access controls in dynamic cloud environments can be challenging, creating the risk of unauthorized access.
Overcoming Obstacles: Implement robust IAM practices, such as the principle of least privilege, multi-factor authentication, and regular reviews of access permissions. Leverage cloud provider IAM tools effectively.

Incident Response in the Cloud
Challenge: Adapting incident response plans to the cloud requires accounting for the unique characteristics of cloud environments.
Overcoming Obstacles: Develop and test cloud-specific incident response plans. Ensure coordination with the cloud service provider and incorporate cloud-native monitoring and detection tools.

Future Trends

The evolution of the NIST CSF to address future challenges will likely involve updates and enhancements that keep pace with the dynamic nature of cybersecurity threats and technology advancements.

- Integration With Emerging Technologies: As technologies such as artificial intelligence, machine learning, and the Internet of Things (IoT) become more prevalent, the NIST CSF may evolve to provide specific guidance on integrating them securely.
- Deeper Focus on Supply Chain Security: This may involve providing organizations with guidelines for assessing and mitigating cybersecurity risks throughout the supply chain, from third-party vendors to interconnected partners.
- Enhanced Threat Intelligence Integration: The NIST CSF may evolve to provide guidance on leveraging threat intelligence feeds, collaborating with cybersecurity information-sharing organizations, and implementing effective threat detection and response strategies.
- International Collaboration and Standardization: The NIST CSF may continue to foster international collaboration and standardization efforts. This could involve aligning the framework with global cybersecurity standards and frameworks, promoting interoperability, and addressing the challenges of cross-border cybersecurity threats.
- Cloud-Native Security Considerations: With the continued adoption of cloud computing, the NIST CSF might offer more granular guidance on cloud-native security. This could include recommendations for securing serverless architectures, containerized applications, and other cloud-native technologies, as well as for addressing the challenges of shared responsibility models.
Over the past few years, AI has steadily worked its way into almost every part of the global economy. Email programs use it to correct grammar and spelling on the fly and suggest entire sentences to round out each message. Digital assistants use it to provide a human-like conversational interface for users. You encounter it when you reach out to any business's contact center. You can even have your phone use AI to wait on hold for you when you exhaust the automated support options and need a live agent instead.

It's no wonder, then, that AI is also already present in the average software developer's toolkit. Today, there are countless AI coding assistants that promise to lighten developers' loads. According to their creators, these tools should help software developers and teams work faster and produce more predictable product outcomes. However, they do something less desirable, too: introduce security flaws.

It's an issue that software development firms and solo coders are only beginning to come to grips with. Right now, it seems there's a binary choice: either use AI coding assistants and accept the consequences, or forgo them and risk falling behind the developers who do use them. Surveys indicate that about 96% of developers have already chosen the former. But what if there were another option? What if you could mitigate the risks of using AI coding assistants without harming your output? Here's a simple framework developers can use to pull that off.

Evaluate Your AI Tools Carefully

The first way to mitigate the risks that come with AI coding assistants is to thoroughly investigate any tool you're considering before you use it in production. The best way to do this is to use the tool in parallel with a few of your development projects to see how the results stack up against your human-created code.
This will give you an opportunity to assess the tool's strengths and weaknesses and to look for any persistent output problems that might make it a non-starter for your specific development needs. This simple vetting procedure should let you choose an AI coding assistant that's suited to the tasks you plan to give it. It should also alert you to any significant secure-coding shortcomings associated with the tool before it can affect a live project. If those shortcomings are insignificant, you can use what you learn to clean up any code that comes from the tool. If they're significant, you can move on to evaluating another tool instead.

Beef Up Your Code Review and Validation Processes

Next, it's essential to beef up your code review and validation processes before you begin using an AI coding assistant in production. This should include multiple static analysis passes on all the code you produce, especially any that contains AI-generated code. This should help you catch the majority of inadvertently introduced security vulnerabilities. It should also give your human developers a chance to read the AI-generated code, understand it, and point out any obvious issues with it before moving forward. Your code review and validation processes should also include dynamic testing as soon as each project reaches the point where it's feasible. This will help you evaluate the security of your code as it exists in the real world, including any user interactions that could introduce additional vulnerabilities.

Keep Your AI Tools Up to Date

Finally, you should create a process that ensures you're always using the latest version of your chosen AI tools. The developers of AI coding assistants are always making changes aimed at increasing the reliability and security of the code their tools generate. It's in their best interest to do so, since any flawed code traced back to their tool could lead to developers dropping it in favor of a competitor.
However, you shouldn't blindly update your toolset, either. It's important to keep track of the changes each update to your AI coding assistant introduces. You should never assume that an updated version of the tool you're using will still be suited to your specific coding needs. So, if you spot any changes that might call for a reevaluation of the tool, that's exactly what you should do. If you can't afford to be without your chosen AI coding assistant long enough to repeat the vetting process you started with, continue using the older version. However, you should have the new version perform the same coding tasks and compare the output. This should give you a decent idea of how an update's changes will affect your final software products.

The Bottom Line

Realistically, AI code generation isn't going away. Instead, it likely won't be long before it's an integral part of every development team's workflow. However, we've not yet reached the point where human coders should blindly trust the work product of their AI counterparts. By taking a cautious approach and integrating AI tools thoughtfully, developers should be able to reap the rewards of these early AI tools while insulating themselves from their very real shortcomings.
User management is required for most web applications, but building it isn't always an easy task. Many developers work around the clock to ensure their app is secure by seeking out individual vulnerabilities to patch. Luckily, you can increase your own efficiency by implementing OAuth 2.0 with Spring Security and Spring Boot. The process gets even easier by integrating with Okta on top of Spring Boot. In this tutorial, you'll first build an OAuth 2.0 web application and authentication server using Spring Boot and Spring Security.

OAuth 2.0 is an open standard authorization protocol that enables secure and delegated access to resources on the web. It allows users to grant limited access to their resources, such as profiles or data, to third-party applications without sharing their credentials. OAuth 2.0 is widely used for authentication and authorization in modern web and mobile applications.

Spring Boot is a Java-based open-source framework that simplifies the process of building, deploying, and managing production-ready applications. It provides a convention-over-configuration approach, minimizing the need for boilerplate code and configuration and allowing developers to focus on writing business logic. With embedded servers and a variety of built-in tools, Spring Boot makes it easy to create standalone, production-grade Spring-based applications. DZone has previously covered how to secure a Spring Boot application with Keycloak.

Spring Security is a powerful and customizable authentication and access control framework for Java applications, particularly those built on the Spring framework. It provides comprehensive security services for Java EE-based enterprise software applications. With Spring Security, developers can easily integrate authentication and authorization mechanisms into their applications, protecting resources and ensuring secure interactions.
The framework supports various authentication providers, including LDAP, OAuth, and custom implementations, making it adaptable to a variety of security requirements. Once we've secured our Spring Boot application with Spring Boot and Spring Security, we'll use Okta to get rid of our self-hosted authentication server and simplify the application even more. Let's get started!

Create an OAuth 2.0 Server

Start by going to the Spring Initializr and creating a new project with the following settings:

- Change the project type from Maven to Gradle.
- Change the Group to com.okta.spring.
- Change the Artifact to AuthorizationServerApplication.
- Add one dependency: Web.

Download the project and copy it somewhere that makes sense on your hard drive. In this tutorial, you're going to create three different projects, so you might want to create a parent directory, something like SpringBootOAuth. You need to add one dependency to the build.gradle file:

```groovy
implementation 'org.springframework.security.oauth:spring-security-oauth2:2.3.3.RELEASE'
```

This adds in Spring's OAuth goodness. Update src/main/resources/application.properties to match:

```properties
server.port=8081
server.servlet.context-path=/auth
user.oauth.clientId=R2dpxQ3vPrtfgF72
user.oauth.clientSecret=fDw7Mpkk5czHNuSRtmhGmAGL42CaxQB9
user.oauth.redirectUris=http://localhost:8082/login/oauth2/code/
user.oauth.user.username=Andrew
user.oauth.user.password=abcd
```

This sets the server port, servlet context path, and some default values for the in-memory, ad hoc generated tokens the server is going to return to the client, as well as for our user's username and password. In production, you would need a somewhat more sophisticated back end for a real authentication server, without the hard-coded redirect URIs, usernames, and passwords.
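As an aside echoing the environment-variable advice at the top of this Zone: Spring resolves `${...}` placeholders in properties files against the process environment, so one way to soften those hard-coded secrets is a sketch like the following. This is an illustrative tweak, not part of the tutorial, and the environment variable names here are hypothetical:

```properties
# Hypothetical hardened variant: pull secrets from environment variables
# at startup instead of committing them to the repository.
user.oauth.clientId=${OAUTH_CLIENT_ID}
user.oauth.clientSecret=${OAUTH_CLIENT_SECRET}
user.oauth.user.username=${OAUTH_DEMO_USER}
user.oauth.user.password=${OAUTH_DEMO_PASSWORD}
```

The placeholders fail fast at startup if a variable is missing, which beats silently shipping a default secret.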
Update the AuthorizationServerApplication class to add @EnableResourceServer:

```java
package com.okta.spring.AuthorizationServerApplication;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.security.oauth2.config.annotation.web.configuration.EnableResourceServer;

@SpringBootApplication
@EnableResourceServer
public class AuthorizationServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(AuthorizationServerApplication.class, args);
    }
}
```

Create a new class AuthServerConfig in the same package as your application class, com.okta.spring.AuthorizationServerApplication, under src/main/java (from now on, please create Java classes in src/main/java/com/okta/spring/AuthorizationServerApplication). This Spring configuration class enables and configures an OAuth authorization server.

```java
package com.okta.spring.AuthorizationServerApplication;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.crypto.password.PasswordEncoder;
import org.springframework.security.oauth2.config.annotation.configurers.ClientDetailsServiceConfigurer;
import org.springframework.security.oauth2.config.annotation.web.configuration.AuthorizationServerConfigurerAdapter;
import org.springframework.security.oauth2.config.annotation.web.configuration.EnableAuthorizationServer;
import org.springframework.security.oauth2.config.annotation.web.configurers.AuthorizationServerSecurityConfigurer;

@Configuration
@EnableAuthorizationServer
public class AuthServerConfig extends AuthorizationServerConfigurerAdapter {

    @Value("${user.oauth.clientId}")
    private String ClientID;

    @Value("${user.oauth.clientSecret}")
    private String ClientSecret;

    @Value("${user.oauth.redirectUris}")
    private String RedirectURLs;

    private final PasswordEncoder passwordEncoder;

    public AuthServerConfig(PasswordEncoder passwordEncoder) {
        this.passwordEncoder = passwordEncoder;
    }

    @Override
    public void configure(AuthorizationServerSecurityConfigurer oauthServer) throws Exception {
        oauthServer.tokenKeyAccess("permitAll()")
                .checkTokenAccess("isAuthenticated()");
    }

    @Override
    public void configure(ClientDetailsServiceConfigurer clients) throws Exception {
        clients.inMemory()
                .withClient(ClientID)
                .secret(passwordEncoder.encode(ClientSecret))
                .authorizedGrantTypes("authorization_code")
                .scopes("user_info")
                .autoApprove(true)
                .redirectUris(RedirectURLs);
    }
}
```

The AuthServerConfig class is the class that will create and return our JSON web tokens when the client properly authenticates. Create a SecurityConfiguration class:

```java
package com.okta.spring.AuthorizationServerApplication;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.annotation.Order;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;

@Configuration
@Order(1)
public class SecurityConfiguration extends WebSecurityConfigurerAdapter {

    @Value("${user.oauth.user.username}")
    private String username;

    @Value("${user.oauth.user.password}")
    private String password;

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.requestMatchers()
                .antMatchers("/login", "/oauth/authorize")
                .and()
                .authorizeRequests()
                .anyRequest().authenticated()
                .and()
                .formLogin().permitAll();
    }

    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        auth.inMemoryAuthentication()
                .withUser(username)
                .password(passwordEncoder().encode(password))
                .roles("USER");
    }

    @Bean
    public BCryptPasswordEncoder passwordEncoder() {
        return new BCryptPasswordEncoder();
    }
}
```

The SecurityConfiguration class is the class that actually authenticates requests to your authorization server. Notice near the top where it's pulling in the username and password from the application.properties file. Lastly, create a Java class called UserController:

```java
package com.okta.spring.AuthorizationServerApplication;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import java.security.Principal;

@RestController
public class UserController {

    @GetMapping("/user/me")
    public Principal user(Principal principal) {
        return principal;
    }
}
```

This file allows the client apps to find out more about the users who authenticate with the server. That's your authorization server! Not too bad. Spring Boot makes it pretty easy: four files and a few properties. In a little bit, you'll make it even simpler with Okta, but for the moment, move on to creating a client app you can use to test the auth server. Start the authorization server:

```shell
./gradlew bootRun
```

Wait a bit for it to finish running. The terminal should end with something like this:

```
...
2019-02-23 19:06:49.122  INFO 54333 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8081 (http) with context path '/auth'
2019-02-23 19:06:49.128  INFO 54333 --- [           main] c.o.s.A.AuthorizationServerApplication   : Started AuthorizationServerApplication in 3.502 seconds (JVM running for 3.945)
```

NOTE: If you get an error about JAXB (java.lang.ClassNotFoundException: javax.xml.bind.JAXBException), it's because you're using Java 11. To fix this, add JAXB to your build.gradle:

```groovy
implementation 'org.glassfish.jaxb:jaxb-runtime'
```

Build Your Client App

Back to the Spring Initializr. Create a new project with the following settings:

- The project type should be Gradle (not Maven).
- Group: com.okta.spring
- Artifact: SpringBootOAuthClient
- Add three dependencies: Web, Thymeleaf, and OAuth2 Client.

Download the project, copy it to its final resting place, and unpack it. This time you need to add the following dependency to your build.gradle file:

```groovy
implementation 'org.thymeleaf.extras:thymeleaf-extras-springsecurity5:3.0.4.RELEASE'
```

Rename src/main/resources/application.properties to application.yml and update it to match the YAML below:

```yaml
server:
  port: 8082
  session:
    cookie:
      name: UISESSION
spring:
  thymeleaf:
    cache: false
  security:
    oauth2:
      client:
        registration:
          custom-client:
            client-id: R2dpxQ3vPrtfgF72
            client-secret: fDw7Mpkk5czHNuSRtmhGmAGL42CaxQB9
            client-name: Auth Server
            scope: user_info
            provider: custom-provider
            redirect-uri-template: http://localhost:8082/login/oauth2/code/
            client-authentication-method: basic
            authorization-grant-type: authorization_code
        provider:
          custom-provider:
            token-uri: http://localhost:8081/auth/oauth/token
            authorization-uri: http://localhost:8081/auth/oauth/authorize
            user-info-uri: http://localhost:8081/auth/user/me
            user-name-attribute: name
```

Notice that here, you're configuring the clientId and clientSecret, as well as various URIs, for your authentication server. These need to match the values in the other project.
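To make the authorization_code flow behind that configuration concrete: when an unauthenticated user hits a protected route, Spring Security builds a redirect to the authorization-uri for you from these YAML values. The sketch below is not part of the tutorial's code; it only shows, under the assumption of the config above, roughly what that redirect URL looks like (the state value here is a stand-in for the random token Spring generates):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Illustrative sketch of the authorization request the OAuth client triggers.
// Spring Security constructs this for you; this class exists only to make the
// authorization_code flow visible.
public class AuthorizeUrl {
    public static String build(String authorizationUri, String clientId,
                               String redirectUri, String scope, String state) {
        return authorizationUri
                + "?response_type=code"                      // ask for an authorization code
                + "&client_id=" + URLEncoder.encode(clientId, StandardCharsets.UTF_8)
                + "&scope=" + URLEncoder.encode(scope, StandardCharsets.UTF_8)
                + "&state=" + URLEncoder.encode(state, StandardCharsets.UTF_8)
                + "&redirect_uri=" + URLEncoder.encode(redirectUri, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Values taken from the application.yml above; the state is a dummy.
        String url = build("http://localhost:8081/auth/oauth/authorize",
                "R2dpxQ3vPrtfgF72",
                "http://localhost:8082/login/oauth2/code/",
                "user_info", "xyz");
        System.out.println(url);
    }
}
```

The browser follows this URL to the auth server, the user logs in there, and the server redirects back to the redirect_uri with a one-time code that Spring exchanges at the token-uri.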
Update the SpringBootOAuthClientApplication class to match:

```java
package com.okta.spring.SpringBootOAuthClient;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class SpringBootOAuthClientApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringBootOAuthClientApplication.class, args);
    }
}
```

Create a new Java class called WebController:

```java
package com.okta.spring.SpringBootOAuthClient;

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;

import java.security.Principal;

@Controller
public class WebController {

    @RequestMapping("/securedPage")
    public String securedPage(Model model, Principal principal) {
        return "securedPage";
    }

    @RequestMapping("/")
    public String index(Model model, Principal principal) {
        return "index";
    }
}
```

This is the controller that maps incoming requests to your Thymeleaf template files (which you'll make in a second). Create another Java class named SecurityConfiguration:

```java
package com.okta.spring.SpringBootOAuthClient;

import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
public class SecurityConfiguration extends WebSecurityConfigurerAdapter {

    @Override
    public void configure(HttpSecurity http) throws Exception {
        http.antMatcher("/**").authorizeRequests()
                .antMatchers("/", "/login**").permitAll()
                .anyRequest().authenticated()
                .and()
                .oauth2Login();
    }
}
```

This class defines the Spring Security configuration for your application: allowing all requests on the home path and requiring authentication for all other routes. It also sets up the Spring Boot OAuth login flow. The last files you need to add are the two Thymeleaf template files.
A full look at Thymeleaf templating is well beyond the scope of this tutorial, but you can take a look at their website for more info. The templates go into the src/main/resources/templates directory. You'll notice in the controller above that the routes simply return strings. When the Thymeleaf dependencies are included in the build, Spring Boot automatically assumes you're returning the name of a template file from the controllers, so the app will look in src/main/resources/templates for a file whose name is the returned string plus .html.

Create the home template, src/main/resources/templates/index.html:

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Home</title>
</head>
<body>
<h1>Spring Security SSO</h1>
<a href="securedPage">Login</a>
</body>
</html>
```

And the secured template, src/main/resources/templates/securedPage.html:

```html
<!DOCTYPE html>
<html xmlns:th="http://www.thymeleaf.org">
<head>
    <meta charset="UTF-8">
    <title>Secured Page</title>
</head>
<body>
<h1>Secured Page</h1>
<span th:text="${#authentication.name}"></span>
</body>
</html>
```

I'll just point out this one line:

```html
<span th:text="${#authentication.name}"></span>
```

This is the line that inserts the name of the authenticated user, and it's why you needed the org.thymeleaf.extras:thymeleaf-extras-springsecurity5 dependency in the build.gradle file. Start the client application:

```shell
./gradlew bootRun
```

Wait a moment for it to finish. The terminal should end with something like this:

```
...
2019-02-23 19:29:04.448  INFO 54893 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8082 (http) with context path ''
2019-02-23 19:29:04.453  INFO 54893 --- [           main] c.o.s.S.SpringBootOAuthClientApplication : Started SpringBootOAuthClientApplication in 3.911 seconds (JVM running for 4.403)
```

Test the Resource Server

Navigate in your browser of choice to your client app at http://localhost:8082/. Click the Login link.
You'll be directed to the login page. Enter username Andrew and password abcd (from the application.properties file of the authentication server). Click Sign In, and you'll be taken to the super fancy securedPage.html template, which should say "Secured Page" and "Andrew". Great! It works. Now you're gonna make it even simpler. You can stop both the server and client Spring Boot apps. Learn more about how to set up Spring Boot apps with PostgreSQL.

Create an OpenID Connect Application

Okta is a SaaS (software-as-a-service) authentication and authorization provider. We provide free accounts to developers so they can develop OIDC apps with no fuss. Head over to the Okta Developer page and sign up for an account. After you've verified your email, log in and perform the following steps:

- Go to Application > Add Application.
- Select application type Web and click Next.
- Give the app a name. I named mine "Spring Boot OAuth".
- Under Login redirect URIs, change the value to http://localhost:8080/login/oauth2/code/okta.
- The rest of the default values will work. Click Done.

Leave the page open and take note of the Client ID and Client Secret. You'll need them in a moment.

Create a New Spring Boot App

Back to the Spring Initializr one more time. Create a new project with the following settings:

- Change the project type from Maven to Gradle.
- Change the Group to com.okta.spring.
- Change the Artifact to OktaOAuthClient.
- Add three dependencies: Web, Thymeleaf, and Okta.
- Click Generate Project.

Copy the project and unpack it somewhere. In the build.gradle file, add the following dependency:

```groovy
implementation 'org.thymeleaf.extras:thymeleaf-extras-springsecurity5:3.0.4.RELEASE'
```

While you're there, notice the dependency com.okta.spring:okta-spring-boot-starter:1.1.0. This is the Okta Spring Boot Starter, a handy project that makes integrating Okta with Spring Boot nice and easy. For more info, take a look at the project's GitHub.
Change the src/main/resources/application.properties to application.yml and add the following:

```yaml
server:
  port: 8080

okta:
  oauth2:
    issuer: https://{yourOktaDomain}/oauth2/default
    client-id: {yourClientId}
    client-secret: {yourClientSecret}

spring:
  thymeleaf:
    cache: false
```

Remember when I said you'd need your Client ID and Client Secret above? Well, the time has come. You need to fill them into the file, as well as your Okta issuer URL. It's gonna look something like this: dev-123456.okta.com. You can find it under API > Authorization Servers.

You also need two similar template files in the src/main/resources/templates directory. The index.html template file is exactly the same, and can be copied over if you like. The securedPage.html template file is slightly different because of the way the authentication information is returned from Okta as compared to the simple authentication server you built earlier.

Create the home template, src/main/resources/templates/index.html:

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Home</title>
</head>
<body>
    <h1>Spring Security SSO</h1>
    <a href="securedPage">Login</a>
</body>
</html>
```

And the secured template, src/main/resources/templates/securedPage.html:

```html
<!DOCTYPE html>
<html xmlns:th="http://www.thymeleaf.org">
<head>
    <meta charset="UTF-8">
    <title>Secured Page</title>
</head>
<body>
    <h1>Secured Page</h1>
    <span th:text="${#authentication.principal.attributes.name}">Joe Coder</span>
</body>
</html>
```

Create a Java class named WebController in the com.okta.spring.OktaOAuthClient package:

```java
package com.okta.spring.OktaOAuthClient;

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;

import java.security.Principal;

@Controller
public class WebController {

    @RequestMapping("/securedPage")
    public String securedPage(Model model, Principal principal) {
        return "securedPage";
    }

    @RequestMapping("/")
    public String index(Model model, Principal principal) {
        return "index";
    }
}
```

This class simply creates two routes: one for the home route and one for the secured route. Again, Spring Boot and Thymeleaf auto-magically map these to the two template files in src/main/resources/templates.

Finally, create another Java class named SecurityConfiguration:

```java
package com.okta.spring.OktaOAuthClient;

import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
public class SecurityConfiguration extends WebSecurityConfigurerAdapter {

    @Override
    public void configure(HttpSecurity http) throws Exception {
        http.antMatcher("/**").authorizeRequests()
            .antMatchers("/").permitAll()
            .anyRequest().authenticated()
            .and()
            .oauth2Login();
    }
}
```

That's it! Bam!

Run the Okta-OAuth-powered client:

```shell
./gradlew bootRun
```

You should see a bunch of output that ends with:

```
...
2019-02-23 20:09:03.465  INFO 55890 --- [main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2019-02-23 20:09:03.470  INFO 55890 --- [main] c.o.s.O.OktaOAuthClientApplication       : Started OktaOAuthClientApplication in 3.285 seconds (JVM running for 3.744)
```

Navigate to http://localhost:8080. Click the Login button. This time, you'll be directed to the Okta login page. You may need to use an incognito browser or log out of your developer.okta.com dashboard here so that you don't skip the login page and get directed immediately to the secured endpoint. Log in, and you'll see the secured page with your name!

Learn More About Spring Boot, Spring Security, and OAuth 2.0

So, that's that. Super easy. In the previous tutorial, you looked at how to use Spring Boot and Spring Security to implement a very basic authentication server and client app.
Next, you used Okta to make an even simpler client app with fully functioning SSO and OAuth authentication. You can see the completed code for this tutorial on GitHub.
In this post, we are going to demonstrate Spring Security + OAuth2 for securing REST API endpoints on an example Spring Boot project. Clients and user credentials will be stored in a relational database (example configurations are prepared for the H2 and PostgreSQL database engines). To do it, we will have to:

- Configure Spring Security + database
- Create an Authorization Server
- Create a Resource Server
- Get an access token and a refresh token
- Get a secured Resource using an access token

Related Tutorial: Set up a Spring Boot application with PostgreSQL

To simplify the demonstration, we are going to combine the Authorization Server and Resource Server in the same project. As the grant type, we will use password (and we will use BCrypt to hash our passwords). Before you start, you should familiarize yourself with OAuth2 fundamentals. The OAuth 2.0 specification defines a delegation protocol that is useful for conveying authorization decisions across a network of web-enabled applications and APIs. OAuth is used in a wide variety of applications, including providing mechanisms for user authentication.

The Significance of Securing REST APIs

REST APIs serve as the backbone of modern applications, facilitating data exchange and communication between various components. Whether it's a banking application handling financial transactions, a healthcare platform managing sensitive patient data, or an e-commerce system processing user information, the stakes for securing API endpoints are exceptionally high. Unauthorized access, data breaches, and other security threats not only jeopardize user privacy but also pose significant legal and financial risks to organizations. Securing REST APIs is not merely a best practice; it's a critical necessity in an era where cyber threats continue to evolve in sophistication. Unsecured APIs can be exploited to gain unauthorized access, manipulate data, or disrupt services, leading to severe consequences for businesses and users alike.
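Wiring the project to one of the databases mentioned above is plain Spring Boot configuration. A minimal application.yml sketch (the URL, database name, and credentials here are placeholders of mine, not values from the original project):

```yaml
spring:
  datasource:
    # PostgreSQL; swap the URL for jdbc:h2:mem:testdb to use in-memory H2 instead
    url: jdbc:postgresql://localhost:5432/oauth_demo
    username: dbuser
    password: dbpassword
```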
Why Spring Security and OAuth2?

In the complex landscape of web security, Spring Security has emerged as the framework of choice for implementing authentication and authorization in Java applications. Its modular and extensible architecture makes it well-suited for securing REST APIs, providing developers with a powerful toolset to enforce access controls and protect resources.

OAuth2, on the other hand, addresses the specific challenges of authorization by introducing a standardized protocol for delegated access. This protocol allows applications to obtain limited access to a user's resources without exposing sensitive credentials. OAuth2's flexibility makes it particularly suitable for scenarios where third-party applications or services need controlled access to protected resources.

By combining Spring Security and OAuth2, developers can establish a formidable defense against a variety of security threats. Spring Security's capabilities extend from user authentication to intricate authorization scenarios, while OAuth2 simplifies the process of managing access tokens and permissions, enabling a secure yet user-friendly experience.

OAuth Roles

OAuth specifies four roles:

- Resource owner (the user): An entity capable of granting access to a protected resource (for example, an end-user)
- Resource server (the API server): The server hosting the protected resources, capable of accepting and responding to protected resource requests using access tokens
- Client: An application making protected resource requests on behalf of the resource owner and with its authorization
- Authorization server: The server issuing access tokens to the client after successfully authenticating the resource owner and obtaining authorization

[Build an OAuth 2.0 authorization server with Spring Security]

Grant Types

OAuth 2 provides several "grant types" for different use cases.
The grant types defined are:

- Authorization Code
- Password
- Client credentials
- Implicit

The overall flow of a Password Grant:

Application

Let's consider the database layer and application layer for our example application.

Business Data

Our main business object is Company. Based on CRUD operations for Company and Department objects, we want to define the following access rules:

- COMPANY_CREATE
- COMPANY_READ
- COMPANY_UPDATE
- COMPANY_DELETE
- DEPARTMENT_CREATE
- DEPARTMENT_READ
- DEPARTMENT_UPDATE
- DEPARTMENT_DELETE

In addition, we want to create a ROLE_COMPANY_READER role.

OAuth2 Client Setup

We need to create the following tables in the database (for internal purposes of the OAuth2 implementation):

- OAUTH_CLIENT_DETAILS
- OAUTH_CLIENT_TOKEN
- OAUTH_ACCESS_TOKEN
- OAUTH_REFRESH_TOKEN
- OAUTH_CODE
- OAUTH_APPROVALS

Let's assume that we want to call a resource server named resource-server-rest-api. For this server, we define two clients:

- spring-security-oauth2-read-client (scope: read)
- spring-security-oauth2-read-write-client (scopes: read, write)

```sql
INSERT INTO OAUTH_CLIENT_DETAILS(CLIENT_ID, RESOURCE_IDS, CLIENT_SECRET, SCOPE, AUTHORIZED_GRANT_TYPES, AUTHORITIES, ACCESS_TOKEN_VALIDITY, REFRESH_TOKEN_VALIDITY)
VALUES ('spring-security-oauth2-read-client', 'resource-server-rest-api',
        /*spring-security-oauth2-read-client-password1234*/'$2a$04$WGq2P9egiOYoOFemBRfsiO9qTcyJtNRnPKNBl5tokP7IP.eZn93km',
        'read', 'password,authorization_code,refresh_token,implicit', 'USER', 10800, 2592000);

INSERT INTO OAUTH_CLIENT_DETAILS(CLIENT_ID, RESOURCE_IDS, CLIENT_SECRET, SCOPE, AUTHORIZED_GRANT_TYPES, AUTHORITIES, ACCESS_TOKEN_VALIDITY, REFRESH_TOKEN_VALIDITY)
VALUES ('spring-security-oauth2-read-write-client', 'resource-server-rest-api',
        /*spring-security-oauth2-read-write-client-password1234*/'$2a$04$soeOR.QFmClXeFIrhJVLWOQxfHjsJLSpWrU1iGxcMGdu.a5hvfY4W',
        'read,write', 'password,authorization_code,refresh_token,implicit', 'USER', 10800, 2592000);
```

Note that the password is hashed with BCrypt (4 rounds).

Authorities and Users Setup

Spring Security comes with two useful interfaces:

- UserDetails: Provides core user information
- GrantedAuthority: Represents an authority granted to an Authentication object

To store authorization data, we will define the following data model. Because we want to come with some pre-loaded data, below is the script that will load all authorities:

```sql
INSERT INTO AUTHORITY(ID, NAME) VALUES (1, 'COMPANY_CREATE');
INSERT INTO AUTHORITY(ID, NAME) VALUES (2, 'COMPANY_READ');
INSERT INTO AUTHORITY(ID, NAME) VALUES (3, 'COMPANY_UPDATE');
INSERT INTO AUTHORITY(ID, NAME) VALUES (4, 'COMPANY_DELETE');
INSERT INTO AUTHORITY(ID, NAME) VALUES (5, 'DEPARTMENT_CREATE');
INSERT INTO AUTHORITY(ID, NAME) VALUES (6, 'DEPARTMENT_READ');
INSERT INTO AUTHORITY(ID, NAME) VALUES (7, 'DEPARTMENT_UPDATE');
INSERT INTO AUTHORITY(ID, NAME) VALUES (8, 'DEPARTMENT_DELETE');
-- The ROLE_COMPANY_READER role mentioned above; its ID (9) is referenced
-- by the user-authority assignments below.
INSERT INTO AUTHORITY(ID, NAME) VALUES (9, 'ROLE_COMPANY_READER');
```

Here is the script to load all users and assigned authorities:

```sql
INSERT INTO USER_(ID, USER_NAME, PASSWORD, ACCOUNT_EXPIRED, ACCOUNT_LOCKED, CREDENTIALS_EXPIRED, ENABLED)
VALUES (1, 'admin', /*admin1234*/'$2a$08$qvrzQZ7jJ7oy2p/msL4M0.l83Cd0jNsX6AJUitbgRXGzge4j035ha', FALSE, FALSE, FALSE, TRUE);
INSERT INTO USER_(ID, USER_NAME, PASSWORD, ACCOUNT_EXPIRED, ACCOUNT_LOCKED, CREDENTIALS_EXPIRED, ENABLED)
VALUES (2, 'reader', /*reader1234*/'$2a$08$dwYz8O.qtUXboGosJFsS4u19LHKW7aCQ0LXXuNlRfjjGKwj5NfKSe', FALSE, FALSE, FALSE, TRUE);
INSERT INTO USER_(ID, USER_NAME, PASSWORD, ACCOUNT_EXPIRED, ACCOUNT_LOCKED, CREDENTIALS_EXPIRED, ENABLED)
VALUES (3, 'modifier', /*modifier1234*/'$2a$08$kPjzxewXRGNRiIuL4FtQH.mhMn7ZAFBYKB3ROz.J24IX8vDAcThsG', FALSE, FALSE, FALSE, TRUE);
INSERT INTO USER_(ID, USER_NAME, PASSWORD, ACCOUNT_EXPIRED, ACCOUNT_LOCKED, CREDENTIALS_EXPIRED, ENABLED)
VALUES (4, 'reader2', /*reader1234*/'$2a$08$vVXqh6S8TqfHMs1SlNTu/.J25iUCrpGBpyGExA.9yI.IlDRadR6Ea', FALSE, FALSE, FALSE, TRUE);

INSERT INTO USERS_AUTHORITIES(USER_ID, AUTHORITY_ID) VALUES (1, 1);
INSERT INTO USERS_AUTHORITIES(USER_ID, AUTHORITY_ID) VALUES (1, 2);
INSERT INTO USERS_AUTHORITIES(USER_ID, AUTHORITY_ID) VALUES (1, 3);
INSERT INTO USERS_AUTHORITIES(USER_ID, AUTHORITY_ID) VALUES (1, 4);
INSERT INTO USERS_AUTHORITIES(USER_ID, AUTHORITY_ID) VALUES (1, 5);
INSERT INTO USERS_AUTHORITIES(USER_ID, AUTHORITY_ID) VALUES (1, 6);
INSERT INTO USERS_AUTHORITIES(USER_ID, AUTHORITY_ID) VALUES (1, 7);
INSERT INTO USERS_AUTHORITIES(USER_ID, AUTHORITY_ID) VALUES (1, 8);
INSERT INTO USERS_AUTHORITIES(USER_ID, AUTHORITY_ID) VALUES (1, 9);
INSERT INTO USERS_AUTHORITIES(USER_ID, AUTHORITY_ID) VALUES (2, 2);
INSERT INTO USERS_AUTHORITIES(USER_ID, AUTHORITY_ID) VALUES (2, 6);
INSERT INTO USERS_AUTHORITIES(USER_ID, AUTHORITY_ID) VALUES (3, 3);
INSERT INTO USERS_AUTHORITIES(USER_ID, AUTHORITY_ID) VALUES (3, 7);
INSERT INTO USERS_AUTHORITIES(USER_ID, AUTHORITY_ID) VALUES (4, 9);
```

Note that the password is hashed with BCrypt (8 rounds).

Application Layer

The test application is developed in Spring Boot + Hibernate + Flyway, with an exposed REST API.
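A quick way to sanity-check the "rounds" notes above: BCrypt stores its cost factor inside the hash itself (the 08 in $2a$08$... means 2^8 key-expansion rounds). A tiny JDK-only snippet (the class name is mine) that reads the cost back out of the seed data:

```java
public class BcryptCost {

    // Extracts the cost factor from a BCrypt hash such as "$2a$08$...".
    // split("\\$") on "$2a$08$rest" yields ["", "2a", "08", "rest"].
    static int costOf(String hash) {
        String[] parts = hash.split("\\$");
        return Integer.parseInt(parts[2]);
    }

    public static void main(String[] args) {
        // Hashes taken from the seed scripts above.
        String adminHash  = "$2a$08$qvrzQZ7jJ7oy2p/msL4M0.l83Cd0jNsX6AJUitbgRXGzge4j035ha";
        String clientHash = "$2a$04$WGq2P9egiOYoOFemBRfsiO9qTcyJtNRnPKNBl5tokP7IP.eZn93km";
        System.out.println(costOf(adminHash));  // user passwords: 8 rounds
        System.out.println(costOf(clientHash)); // client secrets: 4 rounds
    }
}
```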
To demonstrate company data operations, the following endpoints were created:

```java
@RestController
@RequestMapping("/secured/company")
public class CompanyController {

    @Autowired
    private CompanyService companyService;

    @RequestMapping(method = RequestMethod.GET, produces = MediaType.APPLICATION_JSON_VALUE)
    @ResponseStatus(value = HttpStatus.OK)
    public @ResponseBody List<Company> getAll() {
        return companyService.getAll();
    }

    @RequestMapping(value = "/{id}", method = RequestMethod.GET, produces = MediaType.APPLICATION_JSON_VALUE)
    @ResponseStatus(value = HttpStatus.OK)
    public @ResponseBody Company get(@PathVariable Long id) {
        return companyService.get(id);
    }

    @RequestMapping(value = "/filter", method = RequestMethod.GET, produces = MediaType.APPLICATION_JSON_VALUE)
    @ResponseStatus(value = HttpStatus.OK)
    public @ResponseBody Company get(@RequestParam String name) {
        return companyService.get(name);
    }

    @RequestMapping(method = RequestMethod.POST, produces = MediaType.APPLICATION_JSON_VALUE)
    @ResponseStatus(value = HttpStatus.OK)
    public ResponseEntity<?> create(@RequestBody Company company) {
        companyService.create(company);
        HttpHeaders headers = new HttpHeaders();
        ControllerLinkBuilder linkBuilder = linkTo(methodOn(CompanyController.class).get(company.getId()));
        headers.setLocation(linkBuilder.toUri());
        return new ResponseEntity<>(headers, HttpStatus.CREATED);
    }

    @RequestMapping(method = RequestMethod.PUT, produces = MediaType.APPLICATION_JSON_VALUE)
    @ResponseStatus(value = HttpStatus.OK)
    public void update(@RequestBody Company company) {
        companyService.update(company);
    }

    @RequestMapping(value = "/{id}", method = RequestMethod.DELETE, produces = MediaType.APPLICATION_JSON_VALUE)
    @ResponseStatus(value = HttpStatus.OK)
    public void delete(@PathVariable Long id) {
        companyService.delete(id);
    }
}
```

PasswordEncoders

Since we are going to use different encryption settings for the OAuth2 client and the user, we define separate password encoders:

- OAuth2 client password: BCrypt (4 rounds)
- User password: BCrypt (8 rounds)

```java
@Configuration
public class Encoders {

    @Bean
    public PasswordEncoder oauthClientPasswordEncoder() {
        return new BCryptPasswordEncoder(4);
    }

    @Bean
    public PasswordEncoder userPasswordEncoder() {
        return new BCryptPasswordEncoder(8);
    }
}
```

Spring Security Configuration

Provide UserDetailsService

Because we want to get users and authorities from the database, we need to tell Spring Security how to get this data. To do it, we provide an implementation of the UserDetailsService interface:

```java
@Service
public class UserDetailsServiceImpl implements UserDetailsService {

    @Autowired
    private UserRepository userRepository;

    @Override
    @Transactional(readOnly = true)
    public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException {
        User user = userRepository.findByUsername(username);
        if (user != null) {
            return user;
        }
        throw new UsernameNotFoundException(username);
    }
}
```

To separate the service and repository layers, we create a UserRepository as a JPA repository:

```java
@Repository
public interface UserRepository extends JpaRepository<User, Long> {

    @Query("SELECT DISTINCT user FROM User user " +
           "INNER JOIN FETCH user.authorities AS authorities " +
           "WHERE user.username = :username")
    User findByUsername(@Param("username") String username);
}
```

Learn more about how to upgrade to Spring Boot 3.0 for Spring Data JPA.

Set Up Spring Security

The @EnableWebSecurity annotation and WebSecurityConfigurerAdapter work together to provide security to the application. The @Order annotation is used to specify which WebSecurityConfigurerAdapter should be considered first.
```java
@Configuration
@EnableWebSecurity
@Order(SecurityProperties.ACCESS_OVERRIDE_ORDER)
@Import(Encoders.class)
public class ServerSecurityConfig extends WebSecurityConfigurerAdapter {

    @Autowired
    private UserDetailsService userDetailsService;

    @Autowired
    private PasswordEncoder userPasswordEncoder;

    @Override
    @Bean
    public AuthenticationManager authenticationManagerBean() throws Exception {
        return super.authenticationManagerBean();
    }

    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        auth.userDetailsService(userDetailsService).passwordEncoder(userPasswordEncoder);
    }
}
```

OAuth2 Configuration

First of all, we have to implement the following components:

- Authorization Server
- Resource Server

Authorization Server

The authorization server is responsible for verifying user identity and providing tokens. Spring Security handles authentication, and Spring Security OAuth2 handles authorization. To configure and enable the OAuth 2.0 Authorization Server, we use the @EnableAuthorizationServer annotation.
```java
@Configuration
@EnableAuthorizationServer
@EnableGlobalMethodSecurity(prePostEnabled = true)
@Import(ServerSecurityConfig.class)
public class AuthServerOAuth2Config extends AuthorizationServerConfigurerAdapter {

    @Autowired
    @Qualifier("dataSource")
    private DataSource dataSource;

    @Autowired
    private AuthenticationManager authenticationManager;

    @Autowired
    private UserDetailsService userDetailsService;

    @Autowired
    private PasswordEncoder oauthClientPasswordEncoder;

    @Bean
    public TokenStore tokenStore() {
        return new JdbcTokenStore(dataSource);
    }

    @Bean
    public OAuth2AccessDeniedHandler oauthAccessDeniedHandler() {
        return new OAuth2AccessDeniedHandler();
    }

    @Override
    public void configure(AuthorizationServerSecurityConfigurer oauthServer) {
        oauthServer.tokenKeyAccess("permitAll()")
                   .checkTokenAccess("isAuthenticated()")
                   .passwordEncoder(oauthClientPasswordEncoder);
    }

    @Override
    public void configure(ClientDetailsServiceConfigurer clients) throws Exception {
        clients.jdbc(dataSource);
    }

    @Override
    public void configure(AuthorizationServerEndpointsConfigurer endpoints) {
        endpoints.tokenStore(tokenStore())
                 .authenticationManager(authenticationManager)
                 .userDetailsService(userDetailsService);
    }
}
```

Some important points to note. We:

- Defined the TokenStore bean to let Spring know to use the database for token operations
- Overrode the configure methods to use the custom UserDetailsService implementation, the AuthenticationManager bean, and the OAuth2 client's password encoder
- Defined a handler bean for authentication issues
- Enabled two endpoints for checking tokens (/oauth/check_token and /oauth/token_key) by overriding the configure(AuthorizationServerSecurityConfigurer oauthServer) method

Resource Server

A Resource Server serves resources that are protected by the OAuth2 token. Spring OAuth2 provides an authentication filter that handles this protection. The @EnableResourceServer annotation enables a Spring Security filter that authenticates requests via an incoming OAuth2 token.
```java
@Configuration
@EnableResourceServer
public class ResourceServerConfiguration extends ResourceServerConfigurerAdapter {

    private static final String RESOURCE_ID = "resource-server-rest-api";
    private static final String SECURED_READ_SCOPE = "#oauth2.hasScope('read')";
    private static final String SECURED_WRITE_SCOPE = "#oauth2.hasScope('write')";
    private static final String SECURED_PATTERN = "/secured/**";

    @Override
    public void configure(ResourceServerSecurityConfigurer resources) {
        resources.resourceId(RESOURCE_ID);
    }

    @Override
    public void configure(HttpSecurity http) throws Exception {
        http.requestMatchers()
            .antMatchers(SECURED_PATTERN).and().authorizeRequests()
            .antMatchers(HttpMethod.POST, SECURED_PATTERN).access(SECURED_WRITE_SCOPE)
            .anyRequest().access(SECURED_READ_SCOPE);
    }
}
```

The configure(HttpSecurity http) method configures the access rules and request matchers (paths) for protected resources using the HttpSecurity class. We secure the URL paths matching /secured/**. It's worth noting that to invoke any POST request, the write scope is needed.

Let's check if our authentication endpoint is working. Invoke:

```shell
curl -X POST \
  http://localhost:8080/oauth/token \
  -H 'authorization: Basic c3ByaW5nLXNlY3VyaXR5LW9hdXRoMi1yZWFkLXdyaXRlLWNsaWVudDpzcHJpbmctc2VjdXJpdHktb2F1dGgyLXJlYWQtd3JpdGUtY2xpZW50LXBhc3N3b3JkMTIzNA==' \
  -F grant_type=password \
  -F username=admin \
  -F password=admin1234 \
  -F client_id=spring-security-oauth2-read-write-client
```

You should get a response similar to the following:

```json
{
    "access_token": "e6631caa-bcf9-433c-8e54-3511fa55816d",
    "token_type": "bearer",
    "refresh_token": "015fb7cf-d09e-46ef-a686-54330229ba53",
    "expires_in": 9472,
    "scope": "read write"
}
```

Access Rules Configuration

We decided to secure access to the Company and Department objects on the service layer, using the @PreAuthorize annotation.
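The long Basic value in the curl command above is nothing more than base64(client_id + ":" + client_secret). If you need to regenerate it for another client, a small JDK-only helper (the class and method names are mine) does the job:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {

    // Builds the value that goes after "Basic " in the Authorization header.
    static String basicAuth(String clientId, String clientSecret) {
        String credentials = clientId + ":" + clientSecret;
        return Base64.getEncoder().encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        // The read-write client defined in OAUTH_CLIENT_DETAILS above.
        System.out.println("authorization: Basic "
            + basicAuth("spring-security-oauth2-read-write-client",
                        "spring-security-oauth2-read-write-client-password1234"));
    }
}
```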
```java
@Service
public class CompanyServiceImpl implements CompanyService {

    @Autowired
    private CompanyRepository companyRepository;

    @Override
    @Transactional(readOnly = true)
    @PreAuthorize("hasAuthority('COMPANY_READ') and hasAuthority('DEPARTMENT_READ')")
    public Company get(Long id) {
        return companyRepository.find(id);
    }

    @Override
    @Transactional(readOnly = true)
    @PreAuthorize("hasAuthority('COMPANY_READ') and hasAuthority('DEPARTMENT_READ')")
    public Company get(String name) {
        return companyRepository.find(name);
    }

    @Override
    @Transactional(readOnly = true)
    @PreAuthorize("hasRole('COMPANY_READER')")
    public List<Company> getAll() {
        return companyRepository.findAll();
    }

    @Override
    @Transactional
    @PreAuthorize("hasAuthority('COMPANY_CREATE')")
    public void create(Company company) {
        companyRepository.create(company);
    }

    @Override
    @Transactional
    @PreAuthorize("hasAuthority('COMPANY_UPDATE')")
    public Company update(Company company) {
        return companyRepository.update(company);
    }

    @Override
    @Transactional
    @PreAuthorize("hasAuthority('COMPANY_DELETE')")
    public void delete(Long id) {
        companyRepository.delete(id);
    }

    @Override
    @Transactional
    @PreAuthorize("hasAuthority('COMPANY_DELETE')")
    public void delete(Company company) {
        companyRepository.delete(company);
    }
}
```

Let's test if our endpoint is working fine:

```shell
curl -X GET \
  http://localhost:8080/secured/company/ \
  -H 'authorization: Bearer e6631caa-bcf9-433c-8e54-3511fa55816d'
```

Let's see what happens if we authorize with spring-security-oauth2-read-client instead. This client has only the read scope defined.
```shell
curl -X POST \
  http://localhost:8080/oauth/token \
  -H 'authorization: Basic c3ByaW5nLXNlY3VyaXR5LW9hdXRoMi1yZWFkLWNsaWVudDpzcHJpbmctc2VjdXJpdHktb2F1dGgyLXJlYWQtY2xpZW50LXBhc3N3b3JkMTIzNA==' \
  -F grant_type=password \
  -F username=admin \
  -F password=admin1234 \
  -F client_id=spring-security-oauth2-read-client
```

Then, for the below request:

```shell
curl -X POST \
  http://localhost:8080/secured/company \
  -H 'authorization: Bearer f789c758-81a0-4754-8a4d-cbf6eea69222' \
  -H 'content-type: application/json' \
  -d '{ "name": "TestCompany", "departments": null, "cars": null }'
```

We get the following error:

```json
{
    "error": "insufficient_scope",
    "error_description": "Insufficient scope for this resource",
    "scope": "write"
}
```

Summary

In this blog post, we showed OAuth2 authentication with Spring. Access rights were defined straightforwardly, by establishing a direct connection between User and Authorities. To enhance this example, we could add an additional entity, Role, to improve the structure of the access rights. The source code for the above listings can be found in this GitHub project.
What Is an Authorization Code Grant?

According to the OAuth 2.0 specification, the authorization code grant flow is a two-step process mainly used by confidential clients (a web server or secured application that can guarantee the security of its credentials). OAuth 2.0 is an open standard authorization protocol that enables secure and delegated access to resources on the web. It allows users to grant limited access to their resources (such as profiles or data) to third-party applications without sharing their credentials. OAuth 2.0 is widely used for authentication and authorization in modern web and mobile applications.

Spring Authorization Server is an OAuth 2.0 and OpenID Connect (OIDC) compliant authorization server built on the Spring Framework, designed to simplify the implementation of secure and standardized authorization protocols. Introduced as part of the Spring Security ecosystem, the Spring Authorization Server facilitates the centralized management of authorization policies and access control for distributed applications. It supports various grant types, enabling different authentication flows such as authorization code, client credentials, and refresh token grants. The server's modular architecture and integration with Spring Boot make it easy to configure and customize, providing developers with a flexible and scalable solution for managing authentication and authorization in their Spring-based applications.

DZone has previously covered the OAuth 2.0 specifications and how to implement the OAuth 2.0 client credentials grant flow with Spring's authorization server. In this article, we're going to see how to implement the authorization code grant flow and get it working with Spring Security. In the first step, we'll request the authorization endpoint to get an authorization code from the authorization server, and then use that code to get an access token from the authorization server at the token endpoint.
Getting Started

Please make sure that you have all these dependencies in your pom.xml:

```xml
<dependencies>
    <dependency>
        <groupId>org.springframework.security</groupId>
        <artifactId>spring-security-jwt</artifactId>
        <version>1.1.1.RELEASE</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.security.oauth</groupId>
        <artifactId>spring-security-oauth2</artifactId>
        <version>2.5.0.RELEASE</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.security.oauth.boot</groupId>
        <artifactId>spring-security-oauth2-autoconfigure</artifactId>
        <version>2.4.0</version>
    </dependency>
</dependencies>
```

spring-security-oauth2 has all the core dependencies required for OAuth, and spring-security-jwt adds JWT support to OAuth2. The auto-configure dependency is required for auto-configuration; if you don't want to include it, you will have to add some JAXB dependencies to get things working.

Enabling Authorization Server Support

To enable support for the authorization server, add the @EnableAuthorizationServer annotation on top of @SpringBootApplication:

```java
@EnableAuthorizationServer
@SpringBootApplication
public class SpringAuthorizationServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringAuthorizationServerApplication.class, args);
    }
}
```

Overriding the Authorization Server's Default Configuration

To override the default configuration of Spring's Authorization Server, we need to extend our configuration class from AuthorizationServerConfigurerAdapter. To reduce the code and effort for demonstration purposes, we will use in-memory client configuration. The configuration should look similar to what I have here.
```java
@SuppressWarnings("deprecation")
@Configuration
public class AuthServerConfig extends AuthorizationServerConfigurerAdapter {

    @Bean("passwordEncoder")
    BCryptPasswordEncoder passwordEncoder() {
        return new BCryptPasswordEncoder();
    }

    @Override
    public void configure(ClientDetailsServiceConfigurer clients) throws Exception {
        clients
            .inMemory()
            .withClient("clientId")
            .secret(passwordEncoder().encode("client-secret"))
            .scopes("read", "write")
            .authorizedGrantTypes("authorization_code", "refresh_token")
            .redirectUris("http://localhost:8081/oauth/login/client-app")
            .autoApprove(true);
    }

    @Bean
    JwtTokenStore getAccessTokenConverter() {
        return new JwtTokenStore(JwtTokenEnhancer.getInstance());
    }
}
```

Please make sure that you've marked your class with @Configuration so that it can be picked up by Spring Security OAuth2. As discussed, the authorization code grant flow is for confidential clients that can guarantee the security of their credentials. That is why we use a BCryptPasswordEncoder to encode the client secret, and why a BCrypt bean is defined here. To configure client details, we override the configure method that takes a ClientDetailsServiceConfigurer and, using in-memory configuration, add the required details. The token store bean is used to customize the JWT token: all you have to do is extend your token converter class from JwtAccessTokenConverter and define a bean in the Authorization Server config to tell the Auth Server to use your configuration for JWTs.

Configuring Spring Security

Until now, everything we did was for the Authorization Server. Let's add some Spring Security configuration for the users we will be authenticating.
Extend your class from WebSecurityConfigurerAdapter like so:

```java
@EnableWebSecurity
@Configuration
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {

    @Autowired
    @Qualifier("passwordEncoder")
    BCryptPasswordEncoder passwordEncoder;

    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        auth.inMemoryAuthentication()
            .withUser("username")
            .password(passwordEncoder.encode("password"))
            .roles("USER");
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
            .anyRequest()
            .authenticated().and()
            .formLogin().permitAll()
            .and()
            .logout().permitAll();

        http.csrf().disable();
    }
}
```

It's a normal Spring Security configuration for form login, and we've used in-memory user storage. With that finished, we're ready to start testing the application.

Getting the Authorization Code

To get the authorization code, we request the authorization endpoint, which redirects you to the auth server's login page if you're not authenticated. We hit /oauth/authorize with a few params:

- response_type: Must be set to code (required)
- client_id: The clientId that we set up in the auth server (required)
- state: Some random value to maintain state between server and client (optional)
- redirect_uri: Optional

http://localhost:8080/oauth/authorize?response_type=code&client_id=clientId&state=8781487s1

The link will redirect you to a login page and, after a successful login, it will redirect you to the redirect URI that we set up in the auth server, with some params appended (http://localhost:8081/oauth/login/client-app?code=EXUdZm&state=8781487s1). The code value that we see in the response parameters is the authorization code that we will use later to get the access token and refresh token from the auth server. Learn more tips for OAuth2 token validation.
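If you're scripting this step rather than clicking through a browser, the authorize URL is just a base path plus URL-encoded query parameters. A JDK-only sketch (the class and method names are mine; the parameter values come from the example above):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class AuthorizeUrl {

    // Assembles base?k1=v1&k2=v2, URL-encoding each value.
    static String build(String base, Map<String, String> params) {
        return base + "?" + params.entrySet().stream()
            .map(e -> e.getKey() + "=" + URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8))
            .collect(Collectors.joining("&"));
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("response_type", "code"); // required
        params.put("client_id", "clientId"); // required: as registered on the auth server
        params.put("state", "8781487s1");    // optional: echoed back on the redirect
        System.out.println(build("http://localhost:8080/oauth/authorize", params));
        // prints http://localhost:8080/oauth/authorize?response_type=code&client_id=clientId&state=8781487s1
    }
}
```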
Exchanging the Authorization Code for an Access Token

To get an access token and refresh token, we need to make a POST request with the clientId and client-secret in a basic auth header, along with a few params. Here are code samples of a token request and response:

```
POST /oauth/token?grant_type=authorization_code&code=X2KnGB&client_id=clientId&state=8781487s1 HTTP/1.1
Host: localhost:8080
Authorization: Basic Y2xpZW50SWQ6Y2xpZW50LXNlY3JldA==
cache-control: no-cache
```

Once you make this POST request, you will get a response similar to this:

```json
{
    "access_token": "U9c7fB35jHh6vW-WsBd-VdfLcOs",
    "token_type": "bearer",
    "refresh_token": "hGMk8bgNwO7YHrKaCZkc380BB68",
    "expires_in": 43199,
    "scope": "read write"
}
```

That's all! You can use this token to access protected resources. Since I've signed this token using RSA private and public keys, it's different from a normal JWT token. Further, review how to secure Spring Boot microservices with JSON web tokens. I would like you to see how I implemented the access token converter:

```java
public class JwtTokenEnhancer extends JwtAccessTokenConverter {

    private static final String PRIVATE_KEY = "Your private rsa key";
    private static final String PUBLIC_KEY = "Your public rsa key";

    public JwtTokenEnhancer(String publicKey, String privateKey) {
        super.setSigningKey(privateKey);
        super.setVerifierKey(publicKey);
    }

    public static JwtAccessTokenConverter getInstance() {
        return new JwtTokenEnhancer(JwtTokenEnhancer.PUBLIC_KEY, JwtTokenEnhancer.PRIVATE_KEY);
    }
}
```

Remember, it's wrapped into the token store bean that we defined in the auth server configuration. With that being said, thank you so much for taking the time to read this post, and I will be coming up with some Spring Security 5 OAuth 2.0 articles. This project is available on GitHub.
Authentication

Access to a secure domain must go through the process of authentication. Authentication verifies the identity of the person requesting access to a particular resource, and authorization is granted on that basis. For example, a common way to authenticate a user is with a username and password.

Spring Security

Spring is one of the most popular open-source Java frameworks and has evolved into a huge ecosystem that addresses many facets of enterprise application development. Spring Security is the part of that ecosystem that focuses on authentication and access-control mechanisms. It has built-in support for authenticating users and is the de facto standard for securing Spring-based applications.

Spring Security provides support for both authentication and authorization. While authentication verifies the identity of the user, authorization lays down the rules of access to the resource. Although the default authentication and authorization support is fine in most cases, the real power of Spring Security lies in its customization: it can be extended according to client requirements. The authentication support applies to both servlet and WebFlux environments. Spring Security can be configured to protect against attacks such as session fixation, clickjacking, Cross-Site Request Forgery (CSRF), brute forcing, man-in-the-middle (MITM), and Cross-Site Scripting (XSS). It integrates well with the Servlet API as well as Spring Web MVC.

Why Spring Security?

The Spring ecosystem provides a comprehensive programming and configuration model for developing anything from small applications to large enterprise systems. Security is an indispensable part of any business application, and what better framework can one choose when Spring itself provides the necessary APIs? Related: Secure Spring REST with Spring Security and OAuth. By using Spring Security, we delegate the responsibility for determining the architecture and core security features to a team of experts.
Spring Security has evolved since its inception, is in continuous development, and has stood the test of time. So, when building a Spring application, it is a good idea to include the Spring Security framework: the two not only integrate well but are also reliable. This article will delve into the technical capabilities of Spring Security, specifically authentication. To find the complete code for this article, check out this GitHub repository. Read DZone's guide on implementing OAuth 2.0 with Spring Boot.

The following diagram shows the fundamental process Spring Security uses to address this core security requirement. The figure is generic and can be used to explain all the various authentication methods that the framework supports.

Spring Security has a series of servlet filters (a filter chain). When a request reaches the server, it is intercepted by this series of filters (Step 1 in the preceding diagram). In the reactive world of the new Spring WebFlux web application framework, filters are written quite differently from traditional filters, such as those used in the Spring MVC web application framework. Having said that, the fundamental mechanism remains the same for both.

Execution moves along the filter chain, skipping filters until the right one is reached. Once the request reaches the authentication filter that matches the authentication mechanism in use, that filter extracts the supplied credentials (most commonly a username and password) from the caller. Using the supplied values, the `UsernamePasswordAuthenticationFilter` creates an `Authentication` object: in the preceding diagram, a `UsernamePasswordAuthenticationToken` is created with the supplied username and password (Step 2).
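The "skip until the right filter" dispatch can be modeled in a few lines of plain Java. This is a deliberately simplified, hypothetical sketch of the control flow, not Spring Security's actual filter API; all names here are illustrative only:

```java
import java.util.List;

// A drastically simplified model of a security filter chain: each
// filter declares which requests it handles, and the chain skips
// filters until one matches.
public class FilterChainSketch {

    interface SecurityFilter {
        boolean supports(String path);   // does this filter handle the request?
        String handle(String path);      // e.g., extract credentials
    }

    static String runChain(List<SecurityFilter> chain, String path) {
        for (SecurityFilter filter : chain) {
            if (filter.supports(path)) {
                return filter.handle(path);   // first matching filter wins
            }
        }
        return "no filter handled " + path;
    }

    static String demo() {
        SecurityFilter logoutFilter = new SecurityFilter() {
            public boolean supports(String path) { return path.equals("/logout"); }
            public String handle(String path) { return "session cleared"; }
        };
        SecurityFilter loginFilter = new SecurityFilter() {
            public boolean supports(String path) { return path.equals("/login"); }
            public String handle(String path) { return "credentials extracted"; }
        };
        // The logout filter is skipped; the login filter handles the request.
        return runChain(List.of(logoutFilter, loginFilter), "/login");
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "credentials extracted"
    }
}
```

In real Spring Security, each filter decides internally whether to process the request or delegate onward, but the chain-of-responsibility shape is the same.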
The `Authentication` object created in Step 2 is then used to call the authenticate method of the `AuthenticationManager` interface:

```java
public interface AuthenticationManager {
    Authentication authenticate(Authentication authentication) throws AuthenticationException;
}
```

The actual implementation is provided by `ProviderManager`, which holds a list of configured `AuthenticationProvider` instances:

```java
public interface AuthenticationProvider {
    Authentication authenticate(Authentication authentication) throws AuthenticationException;
    boolean supports(Class<?> authentication);
}
```

The request passes through the various providers, which in due course try to authenticate it. Spring Security ships with a number of `AuthenticationProvider` implementations. In the diagram above, the `AuthenticationProvider` requires user details (some providers require this, but some don't), which are supplied by a `UserDetailsService`:

```java
public interface UserDetailsService {
    UserDetails loadUserByUsername(String username) throws UsernameNotFoundException;
}
```

`UserDetailsService` retrieves the `UserDetails` (an implementation of the user interface) using the supplied username. The authenticate call then ends in one of three ways:

- An `Authentication` object with `authenticated=true`, if Spring Security can validate the supplied user credentials
- An `AuthenticationException`, if Spring Security finds that the supplied user credentials are invalid
- `null`, if Spring Security cannot decide whether it is true or false (a confused state)

If all goes well, Spring Security creates a fully populated `Authentication` object (authenticated: true, a granted-authority list, and the username), which contains the various necessary details. The filter stores this `Authentication` object in the `SecurityContext` object for future use.

Setting Up the Authentication Manager

Spring Security ships with a number of built-in `AuthenticationManager` implementations that can easily be used in your application, along with a number of helper classes for setting up an `AuthenticationManager`.
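The delegation loop inside `ProviderManager` can likewise be modeled in plain Java. This is a hypothetical sketch under simplified stand-in interfaces, not Spring's real classes (those live in `org.springframework.security.authentication`); it only illustrates the "try each provider in order, honoring supports()" control flow and the three possible outcomes described above:

```java
import java.util.List;

public class ProviderManagerSketch {

    // Tiny stand-ins for Spring Security's contracts.
    interface Authentication { String name(); boolean authenticated(); }

    interface AuthenticationProvider {
        Authentication authenticate(Authentication request); // null = "can't decide"
        boolean supports(Class<?> authClass);
    }

    record Token(String name, boolean authenticated) implements Authentication {}

    // Try each provider in declaration order; the first one that both
    // supports the token type and returns a result wins. A provider
    // may also throw to signal invalid credentials.
    static Authentication authenticate(List<AuthenticationProvider> providers,
                                       Authentication request) {
        for (AuthenticationProvider provider : providers) {
            if (!provider.supports(request.getClass())) continue;
            Authentication result = provider.authenticate(request);
            if (result != null) return result;
        }
        throw new IllegalStateException("No provider authenticated " + request.name());
    }

    static Authentication demo() {
        AuthenticationProvider inMemory = new AuthenticationProvider() {
            public Authentication authenticate(Authentication request) {
                // "Validate" the user and return a fully populated,
                // authenticated token, mirroring the success case above.
                return request.name().equals("admin") ? new Token("admin", true) : null;
            }
            public boolean supports(Class<?> authClass) {
                return Token.class.isAssignableFrom(authClass);
            }
        };
        return authenticate(List.of(inMemory), new Token("admin", false));
    }

    public static void main(String[] args) {
        System.out.println(demo().authenticated()); // prints "true"
    }
}
```

Note how a provider returning `null` lets the loop fall through to the next provider, which is how multiple configured providers (discussed below) get their turn.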
One helper class is `AuthenticationManagerBuilder`. Using this class, it's quite easy to set up the `UserDetailsService` against a database, in memory, over LDAP, and so on. If the need arises, you could also supply your own custom `UserDetailsService` (for example, if your organization already has a custom single sign-on solution).

You can make an `AuthenticationManager` global, so it is accessible to your entire application; it will then be available to method security and other `WebSecurityConfigurerAdapter` instances. `WebSecurityConfigurerAdapter` is a class that your Spring configuration class extends, making it quite easy to bring Spring Security into a Spring application. This is how you set up a global `AuthenticationManager` using the `@Autowired` annotation:

```java
@Configuration
@EnableWebSecurity
public class SpringSecurityConfig extends WebSecurityConfigurerAdapter {

    @Autowired
    public void confGlobalAuthManager(AuthenticationManagerBuilder auth) throws Exception {
        auth
            .inMemoryAuthentication()
            .withUser("admin")
            .password("{noop}admin@password") // {noop} skips password encoding
            .roles("ADMIN"); // roles() adds the ROLE_ prefix itself
    }
}
```

You can also create a local `AuthenticationManager`, which is only available to this particular `WebSecurityConfigurerAdapter`, by overriding the configure method, as shown in the following code:

```java
@Configuration
@EnableWebSecurity
public class SpringSecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        auth
            .inMemoryAuthentication()
            .withUser("admin")
            .password("{noop}admin@password")
            .roles("ADMIN");
    }
}
```

Another option is to expose the `AuthenticationManager` bean by overriding the authenticationManagerBean method:

```java
@Override
public AuthenticationManager authenticationManagerBean() throws Exception {
    return super.authenticationManagerBean();
}
```

You can also expose an `AuthenticationManager`, `AuthenticationProvider`, or `UserDetailsService` as a bean, which will override the default one.
The preceding code example used `AuthenticationManagerBuilder` to configure in-memory authentication.

AuthenticationProvider

`AuthenticationProvider` supplies the mechanism for getting the user details against which authentication can be performed. Spring Security provides a number of `AuthenticationProvider` implementations, as shown in the following diagram.

Custom AuthenticationProvider

You can also write a custom `AuthenticationProvider` by implementing the `AuthenticationProvider` interface. You have to implement two methods, namely, authenticate(Authentication) and supports(Class<?>):

```java
@Component
public class CustomAuthenticationProvider implements AuthenticationProvider {

    @Override
    public Authentication authenticate(Authentication authentication) throws AuthenticationException {
        String username = authentication.getName();
        String password = authentication.getCredentials().toString();
        if ("user".equals(username) && "password".equals(password)) {
            return new UsernamePasswordAuthenticationToken(username, password, Collections.emptyList());
        } else {
            throw new BadCredentialsException("Authentication failed");
        }
    }

    @Override
    public boolean supports(Class<?> aClass) {
        return aClass.equals(UsernamePasswordAuthenticationToken.class);
    }
}
```

On the GitHub page, navigate to the jetty-in-memory-basic-custom-authentication project to see the full source code of this class.

Multiple AuthenticationProviders

Spring Security allows you to declare multiple `AuthenticationProvider` implementations in your application. They are executed in the order in which they are declared in the configuration.
The jetty-in-memory-basic-custom-authentication project is modified further so that the newly created `CustomAuthenticationProvider` is the first `AuthenticationProvider` (Order 1) and the existing in-memory authentication is the second (Order 2):

```java
@EnableWebSecurity
@ComponentScan(basePackageClasses = CustomAuthenticationProvider.class)
public class SpringSecurityConfig extends WebSecurityConfigurerAdapter {

    @Autowired
    CustomAuthenticationProvider customAuthenticationProvider;

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.httpBasic() // Use Basic authentication
            .and()
            .authorizeRequests()
            .antMatchers("/**")
            .authenticated();
    }

    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        // Custom authentication provider - Order 1
        auth.authenticationProvider(customAuthenticationProvider);

        // Built-in authentication provider - Order 2
        auth.inMemoryAuthentication()
            .withUser("admin")
            .password("{noop}admin@password") // {noop} makes sure that the password encoder doesn't do anything
            .roles("ADMIN") // Role of the user
            .and()
            .withUser("user")
            .password("{noop}user@password")
            .credentialsExpired(true)
            .accountExpired(true)
            .accountLocked(true)
            .roles("USER");
    }
}
```

Whenever an authenticate method completes without error, control returns, and any `AuthenticationProvider` configured after it is not executed.