Top 5 In-Demand Tech Skills for 2024: A Guide for Career Advancement
Artificial intelligence (AI) is one of the twenty-first century's most exciting and rapidly developing fields. AI has the potential to transform a variety of industries, including education, healthcare, retail, e-commerce, public relations, small business, recruitment, services, and manufacturing. It can also improve the quality of life for millions of people worldwide by solving complex problems, increasing efficiency, and enabling novel solutions. But how do you become an AI expert and enter this exciting field? What skills, education, and career paths should you pursue? In this blog post, we will answer these questions and provide a detailed guide on how to become an AI expert.

What Is an AI Expert?

An AI expert is a professional who understands the fundamentals, methods, and applications of artificial intelligence. AI experts create, develop, and deploy intelligent systems capable of performing tasks that would normally require human intelligence, such as natural language processing, computer vision, speech recognition, machine learning, and robotics. They can work in a variety of fields, including research, engineering, consulting, and teaching, and may focus on a specific subfield of AI such as machine learning, deep learning, computer vision, natural language processing, or robotics.

What Are the Skills and Education Required To Become an AI Expert?

To become an AI expert, you need a solid foundation in mathematics, statistics, computer science, and programming, as well as a firm grasp of AI fundamentals such as algorithms, data structures, data analysis, machine learning, and deep learning. Depending on your desired career path, you may also need a bachelor's, master's, or doctoral degree in AI, computer science, or a related subject. Alternatively, you can take online courses, certifications, or boot camps to learn the fundamental skills and concepts of artificial intelligence.
Coursera, Udemy, edX, and Udacity are some of the most popular online platforms for AI courses. Popular AI certifications include the IBM AI Engineering Professional Certificate, the Google TensorFlow Developer Certificate, and the Microsoft Azure AI Engineer Associate.

What Are the Career Paths and Opportunities for AI Experts?

There are numerous career paths and opportunities for AI experts across industries and sectors. Some of the most popular and lucrative AI jobs are:

AI engineer: An AI engineer uses AI and machine learning techniques to create applications and systems that improve organizational efficiency, building the tools, systems, and processes that allow AI to be applied to real-world problems. AI engineers often work with data scientists, machine learning engineers, and other AI professionals to develop and implement AI solutions. The average salary for an AI engineer is $113,000.

Machine learning engineer: A machine learning engineer researches, builds, and designs the AI that powers machine learning. They maintain and improve existing AI systems while conducting experiments and tests to assess and optimize performance, and they collaborate closely with data scientists and AI engineers to create machine learning models and algorithms. The average salary for a machine learning engineer is $123,000.

Data engineer: A data engineer creates systems for collecting, managing, and converting raw data into information that data scientists, business analysts, and other data professionals can interpret. Data engineers make data accessible and reliable, allowing organizations to evaluate and improve their performance, and they work with a variety of data sources, including databases, APIs, web scraping, and cloud services. The average salary for a data engineer is $104,000.
Robotics engineer: A robotics engineer creates robotic applications for a variety of industries, including automotive, manufacturing, defense, and medicine. They design, test, and implement new products or prototypes that incorporate robotic technology, working with a wide range of hardware and software components, including sensors, actuators, controllers, and programming languages. The average salary for a robotics engineer is $96,000.

AI researcher: An AI researcher conducts scientific research into the theory and application of artificial intelligence, investigating new ideas, methods, and applications of AI and publishing findings in academic journals, conferences, and books. AI researchers collaborate with other researchers, academics, and industry partners to advance the field. The average annual salary for an AI researcher is $119,000.

AI consultant: An AI consultant offers expert advice and guidance on the use and implementation of artificial intelligence to a variety of clients and organizations. They analyze a client's needs and goals, recommend the AI solutions best suited to achieving them, and assist with the planning, execution, and evaluation of AI projects. The average annual salary for an AI consultant is $102,000.

How To Become an AI Expert: A Summary

Becoming an AI expert is a rewarding and challenging career path that can open up numerous opportunities. You need a solid background in mathematics, statistics, computer science, and programming, along with a firm understanding of the fundamentals and applications of artificial intelligence. You can acquire these skills through formal education, online courses, certifications, or boot camps.
There are numerous career paths and opportunities for AI experts across industries and sectors. AI engineer, machine learning engineer, data engineer, robotics engineer, AI researcher, and AI consultant are among the most sought-after and lucrative positions, and you can choose the path that aligns with your interests, skills, and goals. If you are interested in artificial intelligence and want to become an AI expert, you can begin your journey today by enrolling in one of the many online courses or certifications that teach the fundamental skills and concepts of AI. You can also explore the many resources and communities available to help you learn more about AI and keep up with the latest trends and developments in the field. We hope this blog post provided you with a comprehensive guide on how to become an AI expert. If you have any questions or comments, please post them below. We'd love to hear from you.
I knew a Chief Software Architect at a major financial organization who was an anomaly: he had never developed software professionally. His undergraduate degree is in Accounting and Finance, and his most hands-on technology role was as a DBA. [His LinkedIn profile lists an early Programmer role, though he insisted he didn't.] Even so, he was well respected within his organization for his thought leadership and solutions, but it still seemed an unusual career path. Since I last worked with him, he has moved into C-level roles at other organizations, confirming his abilities as a technology leader. Then I thought of others I have worked with who are non-technical but positioned to impact technical direction, and realized their lack of understanding impacted (and continues to impact) the quality of the software solutions we, as engineers, are expected to deliver.

Chief Non-Technical Officer

This CTO has been with her/his company for many years in many roles: Director of Support, Chief Strategy Officer, Chief Cultural Officer, and Chief Technical Officer. S/he does not deny being a weak technologist – at times even wearing it as a badge of honor – yet confidently states decisions and direction so that they become a fait accompli: alternatives that challenge her/his understanding are not well received. At times, her/his inner circle helps form a more nuanced understanding, but only to a point: overcoming existing preconceived notions is difficult, and blatant opposition results in being sidelined from future discussions. By no means is s/he a total technical novice, but fundamental change requires extensive effort and time. Her/his oft-repeated mantra went something like this: Don't tell me you're refactoring; refactoring brings no value to our customers. Harking back to her/his strategy days, where feature-feature-feature is the overwhelming driver, this mantra confirmed her/his denial – or lack of understanding – of the current state of the product.
The growing and maturing customer base made clear that areas of the product needed love and attention, but proposed efforts to address them were not prioritized because – in her/his view of the world – there was no visible benefit to customers, at least not to the customers asking for new or extended features. The real technologists of the company understood the potential benefits to both customer and company: performance and scaling improvements, reduced cloud costs, faster deployments, fewer outages, faster feature delivery, a reduced technology stack, and a consistent and intuitive user experience. Regardless of potential benefits, nothing labeled refactoring would survive planning. The problems continued to grow, and the problems continued to be ignored. Sigh.

Product

To be clear, I have no interest in becoming a product owner: the wide-ranging responsibilities require a breadth of knowledge and experience not often found in a single person, while their many stakeholders – both internal and external – have contradictory goals and agendas that need to be balanced. I view it as a political role, finding compromises that please (appease) most, with no one getting everything s/he desired. This role is not for the weak and timid. Once we accept that product owners are unlikely to have the background or experience necessary to handle all of these responsibilities, we can understand why the focus falls on the responsibilities understood or deemed important by their leaders. Outside of organizations offering technical solutions, product owners often have a stronger business understanding than technology understanding, based on their work experience. Perhaps not surprisingly, the product is then defined by business expectations more than technical requirements: future features and functionality are defined by understanding industry trends, reviewing customer feedback, interpreting sales and usage analytics, defining the user experience, etc.
In essence, the product owner is an overclocked business analyst.

Real-World Example

A particular product manager focused only on rapidly releasing new features, regardless of technical stability. Over time, the issues grew to the point where outages – not processing failures, actual outages – occurred daily and could no longer be ignored. She continued to view the work as unnecessary and not beneficial to the product, resulting in this exchange during quarterly planning: The result is that product owners often eschew – whenever possible – the technology and technical viability aspects of the product, reducing the impact of technology during product planning. Instead of top-down planning, individual engineers attempt to push technical issues bottom-up, which is very difficult and often unsuccessful. Organizations require a strong engineering discipline and culture to offset the business focus of product owners, but it remains a frustrating challenge. [Of course, production technology issues do arise that demand immediate attention, but the resulting work is stressful, particularly for the engineers responsible for implementing the required changes; the result is often a one-off effort rather than a fundamental change to the overall culture.]

The Not-Ready-For-Prime-Time Implementation

This is less about an individual or role than an organizational culture problem: proofs-of-concept assumed to be production-ready. Software proofs-of-concept (POCs) are created to test new business concepts or determine the usefulness or applicability of new technology. POCs should be built with minimal engineering rigor, allowing a quick and cheap implementation that can be discarded without guilt once the results are evaluated. Most importantly, a POC is not intended to be a workable product. Despite these clear expectations, too often I've seen the business get excited at seeing the POC and want it available to customers immediately.
The POC might be slightly enhanced or it might be unaltered, but it's out there for the world (internal or external) to use. And when the problems start appearing – because, by definition, it was not intended for real-world usage – the finger-pointing begins. Agile advocates snigger and say You needed an MVP, silly! but my experience with MVPs is much the same as with POCs: poor. By definition, an MVP is a complete application without the bells and whistles, but corners are inevitably cut: crawling (of the crawl/walk/run paradigm) when current volumes require walk, run, or even fly; minimal or non-existent observability; a non-standard user experience; incomplete or incorrect API definitions; security through obscurity; incomplete error handling. When leaders decide to move forward after a successful MVP, the expectation is to expand and enhance the MVP implementation; in fact, it may be better to start over. [I am not disavowing MVPs' usefulness but rather clarifying that organizations misuse/abuse the term and are, in fact, creating glorified POCs that are not complete, are not ready for users, and are not production ready. Just saying…] So when you next hear of an access application that has been integrated into the enterprise supply chain workflow, don't say I didn't warn you. Organizations that make ignorant decisions about the production-readiness of applications shouldn't be surprised when failures occur later, yet they are, and the engineers are left to pick up the pieces.

What Can You Do?

It's not hopeless, really. It isn't necessarily fun, but there are strategies you can attempt.

Gather

Create a personal archive of articles, use cases, scenarios, and data that allows you to tell stories to non-technical people, helping them understand the tradeoffs present in all organizations. Internally, you might be interested in estimated vs. actual effort for feature delivery, production failure rates, or implementation costs mapped to the customer base.
Are cloud costs increasing faster than customer growth? Did assumptions made during the initial implementation impact the ability to deploy future features, whether positively or negatively? Is supposedly important work upended by unknown and unplanned initiatives? Did a potential security breach impact customer confidence? What was the cost of investigating a potential security breach? Is data quality affecting your reporting, analytics, and billing? There are many different ways to try to understand what's happening within your organization. Almost daily, new articles appear online highlighting the issues and problems other organizations experience: Southwest's 2022 holiday meltdown, a ransomware attack on Vital Care Providers, and Cloudflare's bad software deployment. Not every organization publishes postmortems, but details often leak through other channels. Perhaps more importantly, your organization doesn't want to appear in those articles!

Educate

As most non-technical folks appear unable or unwilling to accept that software is hard, our responsibility – for better or worse – is to show and explain. Each situation requires adjusting the story told, but it is necessary – and never-ending – if the organization is to have any chance of understanding: explaining how software is developed and deployed, demonstrating how a data-driven organization requires quality data to make correct decisions, explaining the advantages and disadvantages of leveraging open source solutions, and showing examples of how open source licenses impact your organization's intellectual property. Look for opportunities to inject background and substance when appropriate. Often, it will appear no one is listening as you repeat yourself, but eventually – hopefully – someone will parrot what you've been saying for months.
Negotiate

Aside from purely research and development roles, engineering for engineering's sake is not feasible: technology concerns must be balanced with business concerns – the product and its competitors, the sales pipeline, customer support and feature requests, security, privacy, compliance, etc. Each decision has short- and long-term impacts, and it is very unlikely that everyone involved will be pleased. Sorry, but that's corporate politics. That does not mean you roll over and play dead, but rather that you horse-trade, often with management and product, to ensure the technical concerns aren't forgotten: ensure that changes in business priorities are coupled with impact analysis on in-process development efforts; accept less-than-optimal initial implementations with an agreement on fast-follow work to address the compromises; define metrics that identify when technology-focused work should be prioritized over feature work. These ideas may or may not apply to your organization or situation, but hopefully they will give you ideas to pursue.

Conclusion

The problems I've discussed are age-old and seem to have become worse in recent decades, so I'm not sure any of this is a surprise. Perhaps this is only the latest incarnation of the problem, and post-Agile, a new approach will reap benefits. Perhaps leaders will acknowledge that engineers really do understand the problems and can be trusted to implement solutions, rather than being handed solutions that fit an arbitrary (and often unrealistic) timeline. It's a tug-of-war that I don't yet see resolved.

Image Credits

“Pointy Hair Boss” © Scott Adams
“Productivity: Putting the Kanban Display Together” by orcmid is licensed under CC BY 2.0.
“Analog circuit board prototype” by mightyohm is licensed under CC BY-SA 2.0.
Test automation is essential for ensuring the quality of software products. However, test automation can be challenging to maintain, especially as software applications evolve over time. Self-healing test automation is an emerging concept in software testing that uses artificial intelligence and machine learning techniques to enable automated tests to detect and correct issues themselves. This makes test automation more reliable and cost-effective and reduces the time and resources required to maintain test scripts. In this article, we will discuss the benefits of self-healing test automation, how it works, and how to implement it in your organization.

What Is Self/Auto-Healing Test Automation?

Self-healing test automation is a new approach to test automation that uses artificial intelligence (AI) and machine learning (ML) to make test scripts more robust and adaptable. With self-healing test automation, test scripts can automatically detect and repair themselves when changes are made to the application under test, including shifting layouts and broken selectors. This makes it possible to automate tests for complex applications with frequently changing user interfaces without constantly maintaining and updating the test scripts.

Why Is Self-Healing Test Automation Necessary?

Test automation scripts can easily break when changes are made to the user interface. Test scripts are typically designed to interact with specific elements on the screen, such as buttons, text fields, and labels; when these elements change, the script may no longer be able to find them or interact with them correctly. This can lead to test failures and false positives, which are time-consuming and frustrating to resolve. User interfaces are also constantly evolving, with new features and bug fixes added frequently, so test automation scripts need to be updated regularly to keep up with these changes.
However, updating test automation scripts is a manual and time-consuming process, making it hard to keep up with the pace of change. Self-healing test automation addresses this fragility by adapting to changes in the user interface automatically. Self-healing test scripts detect and repair themselves when changes are made to the application under test, which helps reduce test maintenance costs, improve test quality, and increase test coverage.

How Does the Self-Healing Mechanism Work?

Step 1: The self-healing mechanism is triggered whenever a “NoSuchElement” or similar error occurs for an element referenced in the automation script.
Step 2: The algorithm analyzes the test script to identify the root cause of the error.
Step 3: The algorithm uses AI-powered data analytics to identify the exact object in the test script that has changed. An object can be any interface item, such as a webpage, navigation button, or text box.
Step 4: The algorithm updates the test script with the new identification parameters for the affected object(s).
Step 5: The updated test case is re-executed to verify that the remediation was successful.

How Self-Healing Test Automation Adds Value to Your Software Delivery Process

Leveraging self-healing capabilities allows test automation to adapt to changes, improving test coverage, reducing maintenance effort, and enabling faster feedback.

Saves Time and Effort

Self-healing test automation can save organizations a significant amount of time and effort in software testing. Traditional test automation approaches require manual intervention to fix errors or failures that occur during test execution. This can be a time-consuming and error-prone process, especially when dealing with large and complex test suites.
Self-healing test automation eliminates the need for manual intervention, allowing tests to recover automatically from failures or errors.

Improves Test Coverage

Self-healing test automation can improve test coverage by allowing testers to focus on writing new tests rather than maintaining existing ones. Because self-healing tests automatically adapt to changes in the software under test, testers do not need to spend time updating their tests every time the software changes and can instead write new tests covering new features and functionality. Self-healing automation can improve test coverage by an estimated 5-10% by eliminating unnecessary code, resulting in shorter delivery times and higher returns on investment.

Prevents Object Flakiness

Object flakiness is a common problem in test automation, especially for GUI testing. It occurs when a test fails because it is unable to locate an object on the page, which can happen for a variety of reasons, such as changes to the UI, changes to the underlying code, or network latency. Self-healing test automation can detect and prevent object flakiness by analyzing test results and identifying patterns that indicate flaky tests. By preventing object flakiness, teams reduce the number of false positives and negatives, improving the overall accuracy and reliability of test results.

Faster Feedback Loop

Self-healing test automation also enables a faster feedback loop. With traditional approaches, tests are often run manually or through a continuous integration pipeline; with self-healing test automation, tests can run continuously, providing immediate feedback on the quality of the application under test. This enables teams to identify and fix issues faster, improving the overall quality and reliability of the application.

Conclusion

In Agile methodology, applications are continuously developed and tested in short cycles.
This can make it difficult to maintain test cases, as the application is constantly changing. Self-healing test automation can help to overcome this challenge by automatically updating test cases when the application under test changes.
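The healing loop described in Steps 1-5 can be sketched in a framework-agnostic way. The snippet below is a minimal illustration under stated assumptions, not any vendor's actual implementation: the page is modeled as a plain list of attribute dictionaries rather than a live DOM, `difflib.SequenceMatcher` stands in for the "AI-powered" matching (real tools use much richer models), and the 0.6 confidence threshold is an arbitrary choice.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Stand-in for the "AI-powered" matching step; real tools use richer models.
    return SequenceMatcher(None, a, b).ratio()

def find_element(page, locator):
    # Normal lookup: exact match on the recorded id.
    return next((el for el in page if el.get("id") == locator["id"]), None)

def self_heal(page, locator, threshold=0.6):
    # Steps 2-3: score every candidate against the stale locator's attributes.
    best, best_score = None, 0.0
    for el in page:
        score = max(similarity(el.get("id", ""), locator.get("id", "")),
                    similarity(el.get("text", ""), locator.get("text", "")))
        if score > best_score:
            best, best_score = el, score
    # Only accept the heal when the match clears a confidence threshold.
    return best if best_score >= threshold else None

def locate(page, locator):
    el = find_element(page, locator)
    if el is None:                      # Step 1: "NoSuchElement" triggers healing
        el = self_heal(page, locator)   # Steps 2-3: find the changed object
        if el is not None:
            locator["id"] = el["id"]    # Step 4: update identification parameters
    return el                           # Step 5: caller re-runs the test with it

# A renamed submit button is re-found via its unchanged text and similar id.
page = [{"id": "nav-home", "text": "Home"},
        {"id": "btn-submit-v2", "text": "Submit"}]
locator = {"id": "btn-submit", "text": "Submit"}
healed = locate(page, locator)
```

In a real suite, the updated locator would also be persisted back to the object repository so the next run starts from the healed value rather than re-triggering the mechanism.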
Get ready for a special journey with "Code Wisdom 2024," a guide for developers who want to shine in the world of software development. Enlightenment in development is realising that every bug is a lesson, every error is an opportunity, and every line of code is a chance to create something meaningful. Imagine your year full of wise thoughts, where every month brings a new idea to light up your coding path. These twelve quotes are not just words; they're a compass showing you the way to becoming an excellent coder and a leader. Your journey and your code are about to shine with brilliance. Let's see how these ideas can help developers improve their skills, be more creative, and work better together. Take this challenge, set your goals, and be ready for a year of coding like never before!

January: #CodeGoals2024

In the realm of coding, set achievable goals for the year. Each line of code is a step toward mastery. Setting achievable goals is like creating a roadmap for your coding journey. If you consider your project an epic, divide it into small, achievable stories. Plan realistic deadlines, and never forget to celebrate success. The sense of accomplishment you gain will keep your motivation high.

Achievements
You will improve your time management skills.
Focus and productivity will be enhanced.
Motivation will stay high as the wins become consistent.

February: #TeamCollaboration

Like the code you write, relationships in a team need constant debugging and collaboration. "Be a team player." You have probably received this advice many times. Just as you debug code, debug team dynamics through active listening, regular communication, and prompt conflict resolution. There should be harmony, and you should be one of its biggest contributors.

Achievements
Collaboration within the team will strengthen.
Fewer conflicts; more celebration of success
Job satisfaction

March: #TechEvolution

As the seasons change, so should your tech stack. Embrace new technologies for continuous growth. Change is the only constant in your world of variables. Embrace change and stay abreast of tech trends: attend community events, write blogs, give sessions, and share knowledge. There is always a better way to solve a problem, so go find it.

Achievements
Enhanced skill set
More efficient problem solving
A step toward becoming an industry leader

April: #CodePoetry

Code is like poetry – elegant and expressive. Strive for code that tells a story and captivates its readers. Be a storyteller through your code. Add clarity to your code and be a hero for its future readers. Clean code is a great practice to learn and apply. Believe me, clean code is easier to maintain because of its readability.

Achievements
Improved collaboration with code readers
Faster onboarding of new team members
Fewer errors and, hence, lower maintenance

May: #CodeRefactoring

In the garden of development, refactor your code like pruning branches – for a healthier, more robust project. Keep reviewing your codebase and follow the Boy Scout rule: leave the code better every time you visit it. Remove unnecessary complexity and dead code, and make sure the codebase stays robust as it grows.

Achievements
Improved code maintainability
Less technical debt
Scalability will also be affected positively

June: #SkillSetExpansion

As the days lengthen, so should your skill set. Dive into new frameworks and languages. Be a lifelong learner, and explore new languages and frameworks to broaden your skill set. You might end up learning to solve a problem in many ways, which will make you versatile and help you adapt to various project requirements.
Achievements
Broader range of project opportunities
Improved perspective on problem-solving
You can market yourself more effectively

July: #QualityCode

Hot fixes are like ice cream – necessary, but too many can lead to a sticky situation. Aim for quality code. Testing what you have written will save you from small quick fixes. Targeting the root cause of issues, testing thoroughly, and implementing proper solutions will improve the robustness of the codebase.

Achievements
Less chance of new bugs
Stability of the project
A codebase you can be confident in

August: #CodeExplorer

In the vast landscape of development, be an explorer. Seek challenges, solve problems, and grow. We all look for a comfort zone, but make sure yours does not stunt your growth. Go read that new blog you have been meaning to, explore that new tech or tool, and build that one POC. Each challenge is an opportunity for growth and learning.

Achievements
Personal and professional development
Problem-solving skills
Confidence to take on diverse challenges

September: #PatienceInCoding

Debugging is like solving a puzzle. Approach it with patience, and the solution will reveal itself. Analyze a problem systematically, have patience, and stay cool and calm. A good debugging tool is a must, but never forget to talk to a colleague about the issue; you might end up with more effective solutions.

Achievements
Less stress
Problem-solving skills
Better at RCA (Root Cause Analysis)

October: #CodeScalability

Like a seasoned architect, design your codebase with scalability in mind. Lay the foundation for future success. When you wear the architect's hat, think about all the quality attributes and performance, and anticipate future requirements. A robust foundation gives you long-term success in anything you build.

Achievements
Address NFRs (Non-Functional Requirements)
Reduce the chaos
Adaptability increases for changing requirements.
November: #TeamGratitude In the spirit of gratitude, appreciate your team. A collaborative environment fosters innovation. When it comes to saying thank you, never leave any stones untouched. Be grateful. When you build a collaborative environment and ideas are shared freely, each member feels valued. Creativity and innovation are the result of a positive team culture. Achievements Improved team morale Job satisfaction Inner growth December: #YearReflections As the year concludes, reflect on your journey. Celebrate victories, learn from defeats, and enter the new year wiser. Never forget to acknowledge how far you have come and what progress you've made. Celebrate your successes - I mean every bit of them. Use this learning from the 12 steps that you have taken, be grateful, and start designing your next year. Set a vision for your career and take small steps again. Nothing succeeds like success. Achievements Self-awareness Personal growth Increase resilience Conclusion 2024 is a year filled with opportunities for your growth! set goals, work together, try new things in tech, write code with grace, always improve, learn more skills, care about quality, be an explorer, handle challenges patiently, think about scalability, value your team, and take a moment to look back on your journey. Your commitment to growing will not only make you a senior member but also inspire others in their coding journeys. I wish you joy in your coding endeavors and a great new year with new opportunities ahead. “One of my most productive days was throwing away 1000 lines of code.” - Ken Thompson Enjoy these other related articles: 16 Best Programming Quotes Quotes Every Software Engineer Should Know More Good Programming Quotes Happy coding! It's February already! :)
The history of DevOps is definitely worth reading about in a few good books. On that topic, "The Phoenix Project," self-characterized as "a novel of IT and DevOps," is often mentioned as a must-read. Yet for practitioners like myself, a more hands-on book is "The DevOps Handbook" (which shares Gene Kim as an author, alongside Patrick Debois, John Willis, and Jez Humble). It recounts some of the watershed moments in the evolution of software engineering and provides good references on implementation; in fact, it describes how to replicate the transformation narrated in The Phoenix Project and provides case studies. In this brief article, I will use my notes on this great book to distill a concise history of DevOps, add my personal experience and opinion, and establish a link to Cloud Development Environments (CDEs), i.e., the practice of providing access to, and running, development environments online as a service for developers. In particular, I explain how the use of CDEs completes the effort of bringing DevOps "fully online." Explaining the benefits of this shift in development practices, plus a few personal notes, is my main contribution in this brief article. Before clarifying the link between DevOps and CDEs, let's first dig into the chain of events and technical contributions that led to today's main methodology for delivering software.

The Agile Manifesto
The creation of the Agile Manifesto in 2001 set forth values and principles as a response to more cumbersome software development methodologies like Waterfall and the Rational Unified Process (RUP). One of the manifesto's core principles emphasizes the importance of delivering working software frequently, ranging from a few weeks to a couple of months, with a preference for shorter timescales. The Agile movement's influence expanded in 2008 during the Agile Conference in Toronto, where Andrew Shafer suggested applying Agile principles to IT infrastructure rather than just to the application code.
This idea was further propelled by a 2009 presentation at the Velocity Conference, where a paper from Flickr demonstrated the impressive feat of "10 deployments a day" through Dev and Ops collaboration. Inspired by these developments, Patrick Debois organized the first DevOps Days in Belgium, effectively coining the term "DevOps." This marked a significant milestone in the evolution of software development and operational practices, blending Agile's swift adaptability with a more inclusive approach to the entire IT infrastructure.

The Three Ways of DevOps and the Principles of Flow
All the concepts I have discussed so far are today embodied in the "Three Ways of DevOps," i.e., the foundational principles that guide DevOps practices and processes. In brief, these principles focus on:
- Improving the flow of work (First Way): eliminating bottlenecks, reducing batch sizes, and accelerating the workflow from development to production;
- Amplifying feedback loops (Second Way): quickly and accurately collecting information about any issues or inefficiencies in the system; and
- Fostering continuous learning (Third Way): encouraging a culture of continuous learning and experimentation.
Following the leads of Lean Manufacturing and Agile, it is easy to understand what led to the definition of these three principles. I delve more deeply into each of them in this conference presentation. For the current discussion, though (how DevOps history leads to Cloud Development Environments), we just need to look at the First Way, the principle of flow, to understand the causative link. Chapter 9 of The DevOps Handbook explains that the technologies of version control and containerization are central to implementing DevOps flows and establishing a reliable and consistent development process.
At the center of enabling the flow is the practice of incorporating all production artifacts into version control to serve as a single source of truth. This enables the recreation of the entire production environment in a repeatable and documented fashion, and it ensures that production-like development environments can be generated automatically and entirely self-serviced, without requiring manual intervention from Operations. The significance of this approach becomes evident at release time, which is often the first time an application's behavior is observed in a production-like setting, complete with realistic load and production data sets. To reduce the likelihood of issues, developers are encouraged to operate production-like environments on their workstations, created on demand and self-serviced through mechanisms such as virtual images or containers, utilizing tools like Vagrant or Docker. Putting these environments under version control allows the entire pre-production and build processes to be recreated. Note that "production-like" environments really refers to environments that, in addition to having the same infrastructure and application configuration as the real production environments, also contain the additional applications and layers necessary for development.

Figure: Developers are encouraged to operate production-like environments on their workstations, using mechanisms such as virtual images or containers, to reduce the likelihood of execution issues in production.

From Developer Workstations to a CDE Platform
The notion of self-service is already emphasized in The DevOps Handbook as a key enabler of the principle of flow. With 2016 technology, this is realized by downloading environments to the developers' workstations from a registry (such as Docker Hub) that provides pre-configured, production-like environments as files (dubbed infrastructure as code). Docker is often the tool used to implement this function.
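As an illustrative sketch (my own hypothetical example, not taken from the book), such a version-controlled, production-like development environment can be expressed as a single Dockerfile that layers development tooling on top of the same base image used in production:

```dockerfile
# Hypothetical sketch: one version-controlled file describes both the
# production base and a production-like development environment on top of it.
FROM debian:bookworm-slim AS production-base
# Same OS, runtime, and application configuration as production would go here.

FROM production-base AS dev-environment
# Extra layers needed only for development: VCS client, compilers, editors.
RUN apt-get update && apt-get install -y --no-install-recommends \
        git build-essential vim \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /workspace
CMD ["bash"]
```

Because this file lives in version control alongside the application code, any developer (or CI job) can rebuild the same environment on demand, e.g., with `docker build --target dev-environment .`, which is exactly the self-service property the Handbook emphasizes.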
Starting from this operation, developers create an application, in effect, as follows:
1. They access and copy files containing the development environment information to their machines,
2. Add source code to it in local storage, and
3. Build the application locally using their workstation's computing resources.
This is illustrated in the left part of the figure below. Once the application works correctly, the source code is sent ("pushed") to a central code repository, and the application is built and deployed online, i.e., using Cloud-based resources and applications such as CI/CD pipelines. The three development steps listed above are, in effect, the only operations, in addition to authoring source code in an IDE, that are "local," i.e., that use the workstation's physical storage and computing resources. All the rest of the DevOps operations are performed using web-based applications, consumed as a service by developers and operators (even when these applications are self-hosted by the organization). The basic goal of Cloud Development Environments is to move these development steps online as well. To do that, CDE platforms, in essence, provide the following basic services, illustrated in the right part of the figure below:
1. Manage development environments online as containers or virtual machines, such that developers can access them fully built and configured, substituting step (1) above;
2. Provide a mechanism for authoring source code online, i.e., inside the development environment using an IDE or a terminal, substituting step (2); and finally
3. Provide a way to execute build commands inside the development environment (via the IDE or terminal), substituting step (3).
Figure: (left) The classic development data flow requires the use of local workstation resources. (right) The cloud development data flow replaces local storage and computing while keeping a similar developer experience.
On each side, the operations are (1) accessing environment information, (2) adding code, and (3) building the application. Note that the replacement of step (2) can be done in several ways: for example, the IDE can be browser-based (aka a Cloud IDE), or a locally installed IDE can provide a way to author the code remotely in the remote environment. It is also possible to use a console text editor, such as vim, via a terminal. I cannot conclude this discussion without mentioning that multiple containerized environments are often used for testing on the workstation, in particular in combination with the main containerized development environment. Hence, CDE platforms need to reproduce the capability to run containerized environments inside the Cloud Development Environment (itself a containerized environment). If this recursive setup is a bit complicated to grasp, don't worry; we have reached the end of the discussion and can move to the conclusion.

What Comes Out of Using Cloud Development Environments in DevOps
A good way to conclude this discussion is to summarize the benefits of moving development environments from developers' workstations online using CDEs. The use of CDEs for DevOps leads to the following advantages:
- Streamlined Workflow: CDEs enhance the workflow by removing data from the developer's workstation and decoupling the hardware from the development process. This ensures the development environment is consistent and not limited by local hardware constraints.
- Environment Definition: With CDEs, version control becomes more robust, as it can standardize not only the environment definition but all the tools attached to the workflow, leading to a standardized development process and consistency across teams in the organization.
- Centralized Environments: The self-service aspect is improved by centralizing the production, maintenance, and evolution of the environments that support distributed development activities.
This allows developers to quickly access and manage their environments without manual work from Operations.
- Asset Utilization: Migrating the consumption of computing resources from local hardware to centralized, shared cloud resources not only lightens the load on local machines but also leads to more efficient use of organizational resources and potential cost savings.
- Improved Collaboration: Ubiquitous access to development environments, secured by measures embedded in the access mechanisms, allows organizations to cater to a diverse group of developers, including internal, external, and temporary workers, fostering collaboration across teams and geographies.
- Scalability and Flexibility: CDEs offer scalable cloud resources that can be adjusted to project demands, facilitating the management of multiple containerized environments for testing and development and thus supporting the distributed nature of modern software development teams.
- Enhanced Security and Observability: Centralizing development environments in the Cloud not only improves security (more about secure CDEs) but also provides immediate observability due to their online nature, allowing for real-time monitoring and management of development activities.
By integrating these aspects, CDEs become a solution for modern, and in particular cloud-native, software development, and they align with the DevOps principles of flow, feedback, and continuous learning. In an upcoming article, I will discuss the contributions of CDEs across all Three Ways of DevOps. In the meantime, you're welcome to share your feedback with me.
The Azure Solutions Architect Associate certification is a highly regarded credential for Microsoft Azure professionals. Microsoft Azure is a premier cloud platform that offers a diverse set of cloud services, and this certification validates your knowledge of designing Azure-based solutions. In this post, we'll walk you through the certification exam, including advice, resources, and a study plan to help you succeed.

Understanding the Azure Solutions Architect Associate Exam
Before we dive into the preparation process, it's essential to understand the exam format and content. The certification exam assesses your knowledge and practical skills in the following areas:
- Deploy and Configure Infrastructure: Design and implement solutions that deploy and configure infrastructure.
- Implement Workloads and Security: Implement and manage workloads and security in Azure.
- Create and Deploy Apps: Create and deploy applications in Azure.
- Implement Authentication and Secure Data: Implement authentication and secure data solutions.
The exam is designed to evaluate your ability to design solutions that meet specific business objectives in Azure. It consists of multiple-choice questions, case studies, and practical tasks. To pass, you need to score at least 700 out of 1000 points.

Preparing for the Azure Solutions Architect Associate Exam
Effective preparation is key to passing the exam. Here's a step-by-step guide to help you get ready:
1. Understand the Exam Curriculum
Review the Azure Solutions Architect Associate curriculum. Understand the topics and skills required for the certification, and take note of the specific domains and their weight in the exam.
2. Set Up an Azure Environment
You'll need access to an Azure environment to practice.
If you don't have an Azure subscription, you can sign up for a free Azure account. Azure provides a set of free services each month, allowing you to practice and gain hands-on experience.
3. Learn Azure Fundamentals
Before diving into exam-specific tasks, ensure you have a strong understanding of Azure fundamentals. Microsoft provides free learning resources through Microsoft Learn, which includes documentation, tutorials, and interactive modules. Start with the official Azure documentation.
4. Take Online Courses
Consider enrolling in online courses that specifically prepare you for the exam. Platforms like Udemy, Pluralsight, Coursera, and edX offer comprehensive courses that cover the required skills.
5. Practice, Practice, Practice
The exam assesses your practical skills. Practice by working on hands-on labs, case studies, and real-world scenarios, and create and deploy resources on Azure to reinforce your knowledge.
6. Use Azure Documentation
The official Azure documentation is a valuable resource during the exam. Learn how to navigate and search it to quickly find the information you need.
7. Understand Azure Services
Become familiar with Azure services, their use cases, and their capabilities. You should be comfortable with services including Virtual Machines, Azure App Service, Azure Kubernetes Service (AKS), Azure Functions, and more.
8. Simulate Exam Conditions
Set up a practice environment that mirrors the exam conditions. Time yourself and practice under time constraints to get used to the pressure of the real exam.
9. Take Practice Exams
Many online platforms offer practice exams for the certification. These tests simulate the real exam and help you assess your readiness. Take multiple practice exams to gauge your progress.
10. Learn Test-Taking Strategies
During the exam, you'll need to manage your time effectively. Learn test-taking strategies such as reading questions carefully, eliminating incorrect options, and budgeting your time wisely.
11. Join Study Groups
Online study groups, forums, and communities can be beneficial for sharing knowledge and tips. Join Azure Solutions Architect Associate study groups on platforms like Reddit, LinkedIn, or the Microsoft Learn Community to connect with others preparing for the exam.
12. Take Breaks and Rest
Studying for the exam can be intense. Take regular breaks to relax and avoid burnout; a well-rested mind performs better.

Azure Solutions Architect Associate Exam Day Tips
On the day of the exam, keep these tips in mind:
- Read the Instructions: Carefully read the instructions provided at the beginning of the exam, and understand the requirements and constraints.
- Start with Easier Questions: Begin with questions you are confident about to build momentum and boost your confidence.
- Manage Your Time: Keep an eye on the clock and allocate time for each question or task. Don't get stuck on a single question for too long.
- Flag Questions: If you're unsure about a question, flag it for later review and move on to the next; you can come back to it.
- Check Your Work: Before submitting your answers, review your work. Ensure that you've completed all questions and that your answers meet the requirements.
- Stay Calm: Don't panic if you encounter challenging questions. Take a deep breath and, if necessary, skip a challenging question and return to it later.
- Use the Azure Portal: During the exam, you can access the Azure portal and the Azure documentation. Learn how to use these resources effectively.
- Submit Answers Carefully: When you're confident in your answers, submit them, and ensure you don't accidentally submit incomplete or incorrect solutions.
Azure Solutions Architect Associate Resources
To help you prepare effectively for the exam, here are some recommended resources:
- Azure Solutions Architect Associate Curriculum: The official curriculum from Microsoft.
- Azure Documentation: The official Azure documentation is a valuable reference for exam preparation.
- Microsoft Learn: Microsoft's free learning platform with documentation, tutorials, and interactive modules.
- Online Courses on platforms like Udemy, Pluralsight, Coursera, and edX.
- Practice Exams from reputable sources to assess your readiness.

Advanced Study Resources
For those looking to delve deeper into Azure and excel in the Solutions Architect role, consider exploring these advanced resources:
Microsoft Certified: Azure Solutions Architect Expert
The Azure Solutions Architect Associate is a stepping stone to the Azure Solutions Architect Expert certification. This advanced certification focuses on more complex solutions and architectural designs, and earning it can open doors to higher-level roles and responsibilities in cloud architecture.
Azure Architecture Center
The Azure Architecture Center is a comprehensive resource for designing and implementing solutions on Azure. It provides architecture best practices, reference architectures, and design patterns to help you build robust and scalable solutions.
Azure Training Paths
Microsoft offers a range of Azure training paths to help professionals acquire deeper expertise. These paths cover specific areas like AI and machine learning, IoT, and more.
Azure Community
Join the Azure community to connect with other professionals, architects, and experts in the field. Platforms like GitHub, Stack Overflow, and LinkedIn offer opportunities to share knowledge and collaborate on Azure projects.

Conclusion
Obtaining the Azure Solutions Architect Associate certification is a key step in your cloud architecture career.
It demonstrates your proficiency in designing and implementing solutions on Microsoft Azure, a major cloud platform. Passing the exam requires thorough preparation, including extensive study, practical experience, and good test-taking tactics. By following the methods and advice outlined in this post, you can prepare effectively for the exam, demonstrate your abilities, and advance your career. Microsoft Azure is a prominent cloud provider, and qualified professionals are in high demand. By earning the Azure Solutions Architect Associate certification, you can position yourself for exciting career prospects and contribute to the success of organizations that use Azure for their cloud solutions.
Have you ever found yourself in the position of a test engineer embedded in one of the Agile engineering teams? While you have daily interactions with peers, connecting with them on a profound level for the successful execution of job duties can be challenging. Although there is a shared goal of releasing features successfully, we often experience isolation, even while others, like developers, find comfort within the team. In dispersed Agile teams, with time zones adding an extra layer of complexity, the longing for a team to resonate with, connect with, and brainstorm with on all test automation challenges is prevalent. In the expansive landscape of test automation, the creation of an automation guild is more than just collaboration; it stands as a testament to the resilience of SDETs working across diverse time zones and Agile teams. Through this guide, I aim to share the benefits, the challenges overcome, the ways test engineers and SDETs are enriched, and the establishment of a collective force dedicated to advancing excellence in testing.

Breaking Silos
In a world where time zones separate teams and Agile methodologies dictate the rhythm of development, test engineers face a unique challenge. Even though they are part of an Agile team with a shared goal, i.e., a successful release, they must often navigate independently, without a clear direction or purpose. The guild, however, becomes a bridge across these temporal gaps, offering a platform for asynchronous collaboration. It allows members not only to demo their progress, accomplishments, and new utilities that others can leverage, but also to share their challenges and blockers. It may surprise you how often those obstacles are shared by other guild members. Now that they have each other, all heads come together to brainstorm and find common, effective solutions to any testing problem.
Fostering Growth Through Training and Contribution
As important as regular guild meet-ups and collective commitment are, continuous learning and training initiatives are equally vital to empower test engineers to contribute effectively. From workshops on emerging testing methodologies to skill-building webinars, the guild evolves into a learning haven where members grow together, ensuring each test engineer is equipped to make a meaningful impact. It also enhances members' efficiency by reducing redundant effort: understanding what others are working on and what tools are available for use, such as common utilities and shared definitions, enables them to save time by avoiding duplicated work and to contribute more effectively. This isn't just about individual efficiency; it's a strategic move toward collective empowerment.

Grow Your Network and Your Profile
Within the guild, networking is not confined to individual teams. It enables the creation of a network that spans Agile teams, allowing test engineers to understand overall solutions from diverse perspectives. This isn't just about sharing knowledge; it's about broadening domain knowledge. New members become seasoned members who can then mentor new juniors, ensuring that the guild is not just a community but a mentorship ecosystem that thrives on collective wisdom. If there's one lesson that has been repeatedly demonstrated in the guild, it is that challenges are not roadblocks but opportunities for innovation and collaboration. The guild stands as a testament to the fact that, even in the world of test automation, where distances and time zones pose challenges, excellence can be achieved through collective strength. An automation guild is not just about crafting code; it's about crafting a community that advances excellence in testing, collectively and collaboratively.
The future, as envisioned through the chronicles, is one where Test Engineers, regardless of time zones, work seamlessly in a guild that stands as a beacon of innovation, knowledge-sharing, and collective growth.
In this article, we are going to look at the challenges faced when rapidly scaling engineering teams, both in startups and in other kinds of companies focused on product development. These challenges vary across company types, sizes, and stages of maturity; for instance, the growth of a consultancy focused on outsourcing is very different from that of a startup focused on product development. I have experienced a lot of team growth, and I have watched teams grow at several companies, and most of them have faced the same challenges and problems.

Challenges
The following are some of the challenges we have to address in high-growth scenarios:
- Keep growth aligned with productivity: many companies grow, but the output unfortunately falls far short of the goals.
- Avoid team frustration due to failure to achieve growth goals.
- Avoid too much time being consumed by the hiring process for the engineering teams.
- Avoid demotivating newcomers with a chaotic onboarding process: onboarding is their first experience in the company.
- Maintain and promote the cultural values defined by the organization.
- Keep the impact on delivery aligned with the defined goals and risks.
- Ensure new hires meet expectations and goals in terms of value contribution.

Navigating the Challenges
Goals
Goals are the main drivers of the growth strategy. They need to be challenging but also realistic, and linked to the mid-term and long-term vision.
- Challenging: Push the team to go beyond their comfort zone and strive for excellence. This requires effort, innovation, planning, and agility.
- Realistic: Ensure the goals can be achieved, to avoid frustration and burnout. The growth of the company and its success should enhance the motivation and inspiration of the team.
- Long-term: Goals have to be aligned with the company's long-term vision and take a wide view.
Large growth cannot be planned with only the next three months in mind, because that may be the time it takes just to find suitable candidates. Goals have to be measurable, clear, and specific in order to:
- Promote accountability
- Evaluate and measure the goals' success
- Make data-driven decisions
All growth requires dedication and effort from the team: time that they will not dedicate to product evolution or development.

Example: An Unrealistic Goal
Let's suppose we have a team of 10 engineers divided into 2 squads: backend and platform. The company sets the following goals:
- Triple the team in 1 year, from 10 to 30 engineers.
- Keep the delivery performance.
- Create three new squads: DevOps, Data Platform, and Front End.
- Promote the culture.
- Only hire top-tier engineers.
Most likely, we will have to evaluate, in interviews and technical exercises, at best four candidates for each position, in addition to the time dedicated to the onboarding process. Usually, more than one engineer collaborates in the hiring process, so we are likely to see a significant impact on delivery. Finding a team of experienced and highly qualified people is not an easy task; it is necessary to define what we consider "talent" and the different levels at which we can hire. Maintaining and promoting the culture in a high-growth environment, where in one year there are more new people than existing team members, is very complex and requires a good strategy, well-defined objectives, and agility in decision-making. The point is that any one of these objectives would already be ambitious on its own; all of them together make success practically impossible.

Talent Acquisition and Hiring Process
The talent acquisition team plays a crucial role in a company's growth strategy, but they need the support of the whole company. C-levels and hiring managers have to provide all the support they can and be involved as part of the same team.
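To make the scale of this example concrete, here is a rough back-of-the-envelope estimate in Python. The 20 open positions and four candidates per position come from the example above; the hours per candidate and the number of engineers involved per candidate are hypothetical assumptions for illustration only:

```python
# Back-of-the-envelope estimate of the hiring effort in the example above.
# Positions and candidates-per-position come from the text; the per-candidate
# hours and engineers involved are hypothetical assumptions.

open_positions = 30 - 10          # growing from 10 to 30 engineers
candidates_per_position = 4       # best case, per the example
hours_per_candidate = 3           # assumed: CV review + interview + exercise review
engineers_per_candidate = 2       # assumed: two engineers evaluate each candidate

total_candidates = open_positions * candidates_per_position
total_engineer_hours = total_candidates * hours_per_candidate * engineers_per_candidate

print(total_candidates)       # 80 candidates to evaluate
print(total_engineer_hours)   # 480 engineering hours, before any onboarding cost
```

Even under these optimistic assumptions, the team spends hundreds of engineering hours on evaluation alone, which is exactly the delivery impact the example warns about.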
Clear Communication
Foster open and clear communication between teams to ensure that everyone understands the goals and the role each team plays in the process.
Review Pipeline Quality
Sometimes many candidates go through the early stages of the pipeline only to be discarded later, which generates a lot of frustration in the engineering team, because analyzing each candidate requires significant effort. It is important to adjust the requirements and search criteria for candidates in the early stages of the pipeline, and this requires constant communication between the teams.
Market Knowledge
Talent acquisition teams should provide insights into market trends and competitor strategies. This knowledge gives the company important information for defining its expectations and strategy and for staying ahead in the market.
Cultural Values
Keep in mind that each engineer who joins the team brings his or her own culture, shaped by factors such as work experience, personality, or country of residence. Even when these people fit the cultural pattern we are looking for, most of the time they do not yet share the company's culture, and the hiring process alone cannot reliably assess this. If maintaining the culture is important to the company, we need to mentor new employees starting with the recruitment process itself:
- Promote values in the hiring process.
- Promote values in the company and team onboarding process.
- Promote values during the first years through the mentoring process.
Promoting the cultural values and the company's goals must be done continuously, but we must reinforce and review them with new hires more frequently.

Onboarding
In my opinion, the onboarding process plays a huge role in almost every company and is not given enough attention. It is especially important in high-growth companies.
The two main problems are:
- No onboarding process: Onboarding consists of a meeting with human resources, another with the manager, and finally one with the team: a three-hour process. This can only be considered a welcome meeting.
- Highly technical processes: Processes oriented mainly toward performing your first deployment, which promote knowledge silos and little engagement with the company.
The onboarding process must be led by the organization. It must be structured and must encourage a smooth integration of new hires into the organization, minimizing disruption and maximizing productivity over time. In addition, the entire onboarding process should be a step-by-step process with as much documented support as possible. A complete onboarding process could be structured as follows:
Pre-boarding: Includes all the activities between the acceptance of the offer and the new hire's first day. Continuous communication is important because it promotes motivation and cultural values and helps create a sense of belonging to the company.
Welcome day:
- Welcome meeting, company overview, and review of company policies and cultural values
- Paperwork, documentation, and enrollment processes
- Initial equipment setup
- Introduction to the team and manager
- Security training
Company 360 (scheduled month by month): 360-degree meetings with leaders from all departments provide valuable insights, foster collaboration, and help new employees understand the broader organizational context.
Starting the first week:
- Cultural values and goals: The manager and the team share the cultural values and team goals. The goals have to be clear and, for the most part, measurable.
- Mentorship: Assign a mentor to support the integration process, at least during the first year.
- Engineering tech best practices and tools: Share the vision of the organization's architecture principles, DevOps, data principles, tools, and best practices.
- Role-specific training
- Team integration: Start participating in team meetings.
- Feedback and evaluation: Feedback must always be continuous, honest, and constructive. We must put extra emphasis on new hires so we can adjust goals, mentoring, or training. It is best to start with one-to-ones and include evaluation and feedback in these sessions.
- Starting in the third month:
  - Performance evaluation
  - Continuous learning is part of the cultural values, but at this point learning paths could be considered
  - Initiate conversations about long-term career paths.

It is important to avoid onboarding processes based solely on pairing or shadowing strategies, because they require too much effort and tend to generate silos and misalignment. These sessions are important but must be supported by documentation from both the organization and the team itself.

Impact on Delivery

The growth phase often requires a high investment of time, effort, and people in the hiring and onboarding process:

- Hiring process: Participating in technical sessions, reviewing candidate profiles, and reviewing technical exercises.
- Onboarding: Onboarding a new engineer onto a team is always time-consuming and usually involves a learning curve until that person can offer more value than the effort invested in their integration.

In the case of rapid growth, there may be situations in which teams are formed entirely of new engineers. This also has an impact on delivery, because these teams need:

- Mentors and support to adapt to the new environment
- Transversal coordination with other squads

Talent Density

In my opinion, growth should be based on the amount of talent, not the number of engineers. At this point, there are a number of aspects to consider: What does talent mean to our organization? Finding talent is very complicated: there is a lot of competition in the market, there are people who specialize in hiring processes, and there is pressure to grow. Many people mistake talent for knowledge or years of experience.
In my case, I have always given more value to a person's potential for the new role and for the organization than to their experience in the role or the companies where they have worked. The fit of a new hire is not determined solely by the hiring process but also by the evaluation period. Moreover, it is during the evaluation period that we can really evaluate the person, and it is in this period that the decision is less painful for both parties: a person who does not fit the organization will have a negative impact both on themselves and on the organization.

Team Topology

These growth scenarios require changes in the organization and the creation of new teams or departments. Two fundamental factors must be taken into account:

- Team creation strategy
- Conway's Law

Team Creation Strategy

There are several strategies for developing the organization of teams:

- Integrate new hires into existing squads.
- Integrate new hires into existing squads and, after some time, divide the team in two.
- Create entirely new teams from new hires.
- Create a new team from current leadership plus new hires.

The decision to apply a single approach or a combination of several depends on factors including the organization's specific needs, resource availability, and long-term objectives.

Conway's Law

Conway's Law is a principle of software engineering and organizational theory: "Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure." It suggests that the communication patterns, relationships, and team structures within an organization are reflected in the architecture, design, and interfaces of the software or systems they build.

Summary

The growth of engineering teams is one of the most complex challenges facing a growing organization, especially if that growth must be aligned with productivity and cultural goals.
Hiring the number of people we have set as a target can be easy; hiring the right people can be almost impossible; and hiring a sufficient ratio of talented people is very difficult. This can only be done well if you work as a team.
In the first two parts of our series "Demystifying Event Storming," we embarked on a journey through the world of Event Storming, an innovative approach to understanding complex business domains and software systems. We started by exploring the fundamentals of Event Storming, its collaborative nature, and how it differs from traditional approaches. In Part 2, we delved deeper into process modeling, looking at how Event Storming helps in mapping out complex business processes and interactions. Now, in Part 3, we will focus on the design level of Event Storming. This stage is crucial for delving into the technical aspects of system architecture and design. Here, we'll explore how to identify aggregates, a key component in domain-driven design, and how they contribute to creating robust and scalable systems. This part aims to provide practical insights into refining system design and ensuring that it aligns seamlessly with business needs and objectives. Stay tuned as we continue to unravel the layers of Event Storming, providing you with the tools and knowledge to effectively apply this technique in your projects.

Understanding The Visual Model

The Event Storming visual model, introduced in the previous articles, depicts a dynamic and interactive view of system design, highlighting the flow from real-world actions to system outputs and policies. Commands are decisive actions that trigger system operations, while events are the outcomes or results of those actions within the system. Policies serve as the guidelines or business rules that dictate how events are handled, ensuring system behavior aligns with business objectives. The read model represents the information structure affected by events, influencing future system interactions.
Sketches and user inputs provide context and detail, enhancing the understanding of the system's workings. Lastly, hotspots are critical areas needing scrutiny or improvement, often sparking in-depth discussions and problem-solving during an Event Storming session. This comprehensive model underpins Event Storming's utility as a collaborative tool, enabling stakeholders to collectively navigate and design complex software architectures.

Abstraction Levels

"Event Storming is used to design a set of software artifacts that enforce domain logic and business consistency." - Alberto Brandolini

Event Storming is a powerful technique for mapping the intricacies of a system at varying levels of abstraction. This collaborative method enables teams to visualize and understand the flow of events, actions, and policies within a domain.

Big Picture Level

At the Big Picture level of Event Storming, the primary goal is to establish an overarching view of the system. This stage serves as the foundation for the entire process. Participants collaborate to identify major domains or subdomains within the system, often referred to as "big picture" contexts. These contexts represent high-level functional areas or components that play essential roles in the system's operation. The purpose of this level is to provide stakeholders with a holistic understanding of the system's structure and architecture. Sticky notes and a large canvas are used to visually represent these contexts, with each context being named and briefly described. This visualization offers clarity on the overall system landscape and helps align stakeholders on the core domains and areas of focus. During this stage, participants also identify and document potential conflicts within the system.
Conflicts may arise due to overlapping responsibilities, resource allocation, or conflicting objectives among different domains. Recognizing these conflicts early allows teams to address them proactively, minimizing challenges during the later stages of design and development.

In addition to conflicts, participants at the Big Picture level work to define the system's goals. These goals serve as the guiding principles that drive the system's design and functionality. Clear and well-defined goals help ensure that subsequent design decisions align with the system's intended purpose and objectives.

Blockers, which are obstacles or constraints that can impede the system's progress, are another key consideration at this level. Identifying blockers early in the process enables teams to devise strategies to overcome them effectively, ensuring smoother system implementation.

Conceptual boundaries define the scope and context of each domain or subdomain. Understanding these boundaries is essential for ensuring that the system operates seamlessly within its defined constraints. The Big Picture serves as a starting point for addressing these elements, allowing stakeholders to gain insights into the broader challenges and opportunities within the system. This comprehensive view not only aids in understanding the system's structure but also lays the groundwork for addressing these elements in subsequent levels of abstraction during Event Storming.

Process Level

The Process Level of Event Storming delves deeper into the specific business processes or workflows within each identified context or domain. Participants collaborate to define the sequence of events and actions that occur during these processes. The primary goal is to visualize and understand the flow of actions and events that drive the system's behavior.
This level helps uncover dependencies, triggers, and outcomes within processes, providing a comprehensive view of how the system operates in response to various inputs and events. Sticky notes are extensively used to represent events and commands within processes, and the flow is mapped on the canvas. This visual representation clearly shows how events and actions connect to achieve specific objectives, offering insights into process workflows.

At the Process Level, it's essential to identify the value proposition, which outlines the core benefits that the system or process delivers to its users or stakeholders. Understanding the value proposition helps participants align their efforts with the overall objectives and ensures that the designed processes contribute to delivering value.

Policies represent the rules, guidelines, and business logic that govern how events are handled within the system. They define the behavior and decision-making criteria for various scenarios. Recognizing policies during Event Storming ensures that participants consider the regulatory and compliance aspects that impact the processes.

Personas are fictional characters or user profiles that represent different types of system users. These personas help in empathizing with the end users and understanding their needs, goals, and interactions with the system. Incorporating personas at the Process Level enables participants to design processes that cater to specific user requirements.

Individual goals refer to the objectives and intentions of the various actors or participants within the system. Identifying individual goals helps in mapping out the motivations and expected outcomes of different stakeholders. It ensures that the processes align with the diverse goals of the involved parties.

Design Level

At the Design Level of Event Storming, the focus shifts to the internal behavior of individual components or aggregates within the system.
Participants work together to model the commands, events, and policies that govern the behavior of these components. This level allows for a more granular exploration of system behavior, enabling participants to define the contracts and interactions between different parts of the system. Sticky notes continue to be used to represent commands, events, and policies at this level. These notes provide a detailed view of the internal workings of components, illustrating how they respond to commands, emit events, and enforce policies. The Design Level is crucial for defining the behavior and logic within each component, ensuring that the system functions as intended and aligns with business objectives.

Identifying Aggregates

Event Storming intricately intertwines with the principles and vocabulary of Domain-Driven Design (DDD) to model and elucidate technical concepts. Here we reach a crucial and often challenging aspect of DDD: understanding and identifying aggregates. Aggregates, despite being a fundamental part of DDD, are commonly one of the least understood concepts among engineers. This lack of clarity can lead to significant pitfalls in both system design and implementation. Aggregates are more than just collections of objects; they represent carefully crafted boundaries around a set of entities and value objects. These boundaries are crucial for maintaining data integrity and encapsulating business rules. However, engineers often struggle with the optimal size and scope of an aggregate, leading either to overly large aggregates that become bottlenecks or to many small aggregates that make the system unnecessarily complex. I recommend reading my separate article dedicated to understanding aggregates in DDD, which lays the foundation for the concepts we'll explore here.
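The design-level sticky notes map naturally onto code: commands are imperative requests, domain events are past-tense facts. A minimal sketch in Python (all names, such as RentCampervan, are illustrative assumptions, not taken from the article):

```python
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class RentCampervan:
    """Command: a request to act, phrased in the imperative."""
    campervan_id: str
    customer_id: str
    start: date
    end: date


@dataclass(frozen=True)
class CampervanRented:
    """Domain event: an immutable fact, phrased in the past tense."""
    campervan_id: str
    customer_id: str
    start: date
    end: date


def handle(cmd: RentCampervan) -> CampervanRented:
    # The business rule deciding whether the command becomes the event
    # would live here; this sketch accepts every command.
    return CampervanRented(cmd.campervan_id, cmd.customer_id, cmd.start, cmd.end)
```

Making both types frozen dataclasses mirrors the sticky notes: once an event has happened, it cannot be edited, only followed by further events.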
In the intricate process of Design-Level Event Storming, especially when identifying and defining aggregates for a complex system like a campervan rental service, the foremost step is to ensure the involvement of the right mix of people. This team should ideally be a blend of domain experts, who bring in-depth knowledge of the campervan rental business, and technical specialists, such as software developers and architects. Their combined insights are crucial in ensuring that the identified aggregates align with both business realities and technical feasibility. Additionally, including individuals with a focus on user experience is invaluable, particularly for aspects of the system that directly interact with customers.

Once this diverse and knowledgeable team is assembled, a pivotal initial step is to revisit and reflect upon the insights gained from the Process Level. This stage is crucial, as it provides a rich tapestry of information about the business workflows, key events, commands, and the intricate policies that were identified and explored previously. It is at this juncture that a deep understanding of how the business operates comes to the forefront, offering a nuanced perspective that is essential for the next phase of aggregate identification and design.

In Event Storming, the flow often goes from a command (an action initiated) to a domain event (a significant change or result in the system). However, there is usually an underlying business rule that dictates how and why this transition from command to event happens. This is where blank yellow sticky notes come in. The blank yellow sticky note serves as a placeholder for the business rule that connects the command to the domain event. It represents the decision-making logic or criteria that must be satisfied for the event to occur as a result of the command. When a command and its corresponding domain event are identified, a blank yellow sticky note is placed between them.
This signifies that there is a business rule at play, influencing the transition from the command to the event. The blank state of the sticky note invites team members, especially domain experts, to discuss and identify the specific rule or logic. This is a collaborative process where different perspectives help in accurately defining the rule. Through discussion, the team arrives at a clear understanding of the business rules. Participants are asked to fill in these business rules on the yellow sticky notes with comprehensive details about their execution. This involves several key aspects:

- Preconditions: What must be true before the rule is executed? For instance, before the Rent Campervan command can succeed, a precondition might be that the selected campervan must be available for the chosen dates.
- Postconditions: What becomes true after the rule is executed? Following the campervan rental, a postcondition would be that the campervan's status changes to "rented" for the specified period.
- Invariants: What remains true throughout the execution of the rule? An invariant could be that a customer's account must be in good standing throughout the rental process.
- Additional information: Any other clarifications or details that help in understanding what the business rule does.

Some business rules might be straightforward, but others could lead to extensive discussions. This interaction is a crucial part of the knowledge-sharing process. It allows domain experts to clarify complex business logic and developers to understand how these rules translate into system functionality. These discussions are invaluable for ensuring that everyone has a clear and shared understanding of how the business operates and how the system should support these operations. This process is accompanied by an in-depth analysis of the various events and commands that emerged in earlier stages.
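A filled-in yellow sticky note can be read almost directly as guard clauses. A minimal, self-contained sketch of the Rent Campervan rule above (the class and function names are assumptions for illustration, not part of any real codebase):

```python
class RuleViolation(Exception):
    """Raised when a business-rule check fails."""


class Campervan:
    def __init__(self):
        self.status = "available"


class Customer:
    def __init__(self, in_good_standing=True):
        self.in_good_standing = in_good_standing


def rent_campervan(campervan, customer):
    # Precondition: the campervan must be available for the chosen dates.
    if campervan.status != "available":
        raise RuleViolation("campervan is not available")
    # Invariant: the customer's account must be in good standing
    # throughout the rental process.
    if not customer.in_good_standing:
        raise RuleViolation("customer account is not in good standing")
    # Postcondition: the campervan's status changes to "rented".
    campervan.status = "rented"
```

Writing the rule this way makes the sticky-note vocabulary (precondition, invariant, postcondition) visible in the code itself, which eases the conversation between domain experts and developers.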
We noticed a distinct pattern: a cluster of activities and decisions consistently revolved around the management of campervans. The technique involves physically moving these similar business rules on top of one another on the board where the Event Storming session is visualized. This action is more than just an organizational step; it’s a method to highlight and analyze the interrelations and potential redundancies among the rules. This consolidation helps in clearly seeing how different rules interact with the same set of data or conditions. It can reveal dependencies or conflicts between rules that might not have been evident when they were considered in isolation. By grouping similar rules, you simplify the overall complexity of the system. It becomes easier to understand and manage the business logic when it’s seen through grouped, related rules rather than as a multitude of individual ones. This process can also uncover opportunities to refine or merge rules, leading to more streamlined and efficient business processes. Moreover, a closer look at the operational challenges and data cohesion associated with campervans solidified our thinking. We realized that managing various aspects related to campervans under a unified system would streamline operations, reducing complexities and enhancing service efficiency. The disparate pieces of information - maintenance schedules, booking calendars, location tracking - all pointed towards a need for an integrated approach. The decision to establish an aggregate was a culmination of these observations and discussions. It was a decision driven not just by operational logic but by the natural convergence of business activities related to campervans. By forming this aggregate, we envisioned a system where all aspects of a campervan’s lifecycle were managed cohesively, ensuring seamless operations and an enhanced customer experience. 
This approach also brought into focus the need for enforcing consistency across the campervan fleet. By designing an aggregate to encapsulate all aspects related to each vehicle, we ensured that any changes, be it a rental status update or a maintenance check, were consistently reflected across the entire system. This consistency is crucial for maintaining the integrity and reliability of our service. A campervan, for instance, should not be available for booking if it's scheduled for maintenance. Similarly, the location information of each campervan needs to be accurate and up-to-date to ensure efficient fleet management.

Imagine a scenario where a customer books a campervan for a journey from Munich to Paris. Within the aggregate, several pieces of information about each campervan are tracked and managed, including its current location, availability status, and maintenance schedule. When the customer selects a specific campervan for their dates, the aggregate immediately updates the vehicle's status to "rented" for that period. This update is critical to ensure that the same campervan isn't available to other customers for the same dates, preventing double bookings. Simultaneously, let's say this chosen campervan is due for maintenance soon after the customer's proposed return date. The system, adhering to the rules within the aggregate, flags this campervan for maintenance, ensuring that it does not get rented again before the maintenance is completed.

Invariants, or rules that must always hold true, became a cornerstone in the design of the aggregate. These invariants enforce critical business rules and ensure the validity of the system at all times. For example, an invariant in our system ensures that a campervan cannot be simultaneously booked and marked as under maintenance. Such invariants are essential for maintaining data integrity and providing a consistent, reliable service to our customers.
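To make this concrete, here is an independent sketch of the Campervan aggregate root (assumed names; periods are modeled as simple half-open intervals for brevity). All rentals and maintenance windows for one vehicle live inside the aggregate, so the no-overlap invariant can be enforced in one place:

```python
class Campervan:
    """Aggregate root: owns all state for one vehicle and enforces the
    invariant that it is never rented and under maintenance at once."""

    def __init__(self, campervan_id, location):
        self.campervan_id = campervan_id
        self.location = location
        self._rentals = []        # list of (start, end) half-open intervals
        self._maintenance = []    # list of (start, end) half-open intervals

    @staticmethod
    def _overlaps(a, b):
        # Half-open intervals [start, end) overlap iff each starts
        # before the other ends.
        return a[0] < b[1] and b[0] < a[1]

    def rent(self, period):
        if any(self._overlaps(period, r) for r in self._rentals):
            raise ValueError("double booking")
        if any(self._overlaps(period, m) for m in self._maintenance):
            raise ValueError("campervan is scheduled for maintenance")
        self._rentals.append(period)

    def schedule_maintenance(self, period):
        if any(self._overlaps(period, r) for r in self._rentals):
            raise ValueError("campervan is rented during that period")
        self._maintenance.append(period)
```

Because every state change goes through the root, no caller can put the vehicle into an inconsistent state; the double-booking and maintenance checks cannot be bypassed.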
Let's consider a real-life scenario to illustrate this: A family eagerly plans a summer road trip from Munich to the picturesque landscapes of Paris. They find the perfect campervan on our website and proceed to rent it for their adventure. Unseen by them, but crucial for their journey, is the role our invariants play. As soon as they select their dates and campervan, the system springs into action. It checks the aggregate, specifically probing for two critical conditions: is the chosen campervan already rented for these dates, and is it due for maintenance? This is where the invariant exerts its influence. It steadfastly ensures that this campervan is neither engaged in another journey nor scheduled for a maintenance check during the requested time. This rule is inflexible, a cornerstone of our commitment to reliability.

These invariants, embedded within our aggregate, are more than just lines of code or business policies. They are a promise: a promise of adventure without the unexpected, of journeys that create memories, not mishaps. By ensuring that each campervan is adequately prepped and available for every booking, these rules not only keep our operations smooth but also cement our reputation as a reliable and customer-centric business.

In our exploration of the campervan rental business through Event Storming, we've identified a multitude of individual events, commands, and policies. However, these elements gain true significance only when they are clustered together as a cohesive unit. This clustering is what forms the essence of the aggregate. It's a conceptual realization that the isolated pieces, rentals, maintenance schedules, and customer interactions, are interdependent and collectively form the core of our service. Without this unification, each element would merely exist in a vacuum, lacking context and impact. The heart of this aggregate, its root, is the campervan itself.
The campervan is not just a physical entity but a nexus of various business processes and customer experiences. We selected the campervan as the aggregate root because it is the central element around which all other activities revolve. Whether it's booking a campervan, preparing it for a customer, or scheduling its maintenance, every action directly relates to the individual campervan. This choice reflects our understanding that the campervan is the linchpin of our business model, directly influencing customer satisfaction and operational efficiency.

Alberto Brandolini emphasized the "blank naming" strategy. At the initial phase, rather than rushing to assign specific names or predefined functions to aggregates, the team is encouraged to recognize them in a more open-ended manner.

The Naming of Aggregates

This step, strategically placed at the end of the session, is more than just a labeling exercise; it is the final act of distilling and synthesizing the insights gained throughout the event. Early in the session, it might be tempting to assign names to these aggregates. However, this urge is resisted, as premature naming can lead to misconceptions. Names given too early might not fully capture the essence of what an aggregate represents, as they are based on an initial, incomplete understanding of the system. Therefore, the practice of waiting until the end to name these aggregates is not just a procedural step; it is a deliberate choice to ensure accuracy and relevance in naming. Thus, the Campervan Aggregate, with the campervan as its root, becomes a powerful tool in our system architecture, encapsulating the complexity of our operations into a manageable and coherent structure.

Conclusion

As we conclude Part 3 of "Demystifying Event Storming: Design Level, Identifying Aggregates," we have navigated the intricate process of identifying aggregates, a pivotal aspect of Domain-Driven Design.
This journey through the Design Level has illuminated the profound utility of Event Storming in mapping complex systems, highlighting the importance of a collaborative approach and the strategic use of the blank naming strategy. The emergence of the Campervan Aggregate in our rental business model is a testament to the effectiveness of this methodology. It underscores how well-defined aggregates can streamline system design, ensuring alignment with business objectives. The decision to name these aggregates at the end of the session, based on deep insights and understanding, has been crucial in accurately reflecting their roles within the system. Looking ahead, our series will continue to explore the depths of Event Storming. In the next installment, we will delve into identifying bounded contexts, a key concept in Domain-Driven Design that further refines our understanding of complex systems. This next phase will focus on how bounded contexts help in delineating clear boundaries within the system, facilitating better organization and more efficient communication across different parts of the system.
The Rise of the Scrum GPT

Fewer jobs for Scrum Masters, and now the universal Scrum GPT enters the competition: Will Scrum Masters change from essential practitioners to a niche role?

The job market is currently challenging for many agile practitioners, particularly Scrum Masters. Many are looking for new opportunities, while an increasing number of organizations reconsider the benefits they contribute to a team's overall success. This is not just reflected in fewer job offerings for Scrum Masters; we also observe significantly reduced demand for training. To make the situation worse, AI has improved significantly over the last 12 months, too. Back in November 2023, OpenAI released GPTs, a hyper-customizable version of their GPT-4-based chatbot. (Please note that you need access to OpenAI's paid version of ChatGPT to use GPTs.) Besides the exciting market opportunity for many people with specific knowledge, data, or content, the question is what implications this new technology will have on the job market for agile practitioners when their roles are partly based on "knowledge" now available from a machine.

Scrum GPT and Scrum Masters

What Is a GPT?

OpenAI introduced GPTs in November 2023: "You can now create custom versions of ChatGPT that combine instructions, extra knowledge, and any combination of skills." (Source.) Think of them as a way to make your knowledge available to others through a customized chatbot on top of GPT-4. While you do not give third parties direct access to your training material, you train the LLM with your data, information, and knowledge. For example, I just completed the Scrum Anti-Patterns Guide book, and I could use its complete text to train a GPT running on top of GPT-4. (Just ignore the legal side of this for a moment.)
Since OpenAI launched its GPT marketplace, many free offerings have appeared; just search for them: OpenAI GPT Marketplace.

So, I was excited to learn that Jeff Sutherland created such a Scrum GPT: "Exciting news for Agile enthusiasts! Introducing Scrum Sage by Jeff Sutherland - a game-changer in the world of Agile. Launched to chatGPT subscribers today under "Explore GPTs" new feature. Standish Group data reveals a startling 58% of Agile teams struggle with delays, budget issues, and dissatisfied customers. Moreover, recent surveys indicate 88% of these teams use Scrum. Why? Because of widespread misconceptions about Agile, leading to poor implementation. But don't worry! I'm here to help turn the tide, offering genuine, expert advice straight from successful teams. Let's transform Agile practices together!" (Source.)

Of course, I contacted Jeff and asked about the training material for the Scrum GPT: "Exciting, Jeff Sutherland! How did you train the GPT, what material did you use?" Jeff revealed that they mainly used the second edition of his main book: "Stefan Wolpers Base knowledge is the new Second Edition of Scrum: The Art of Doing Twice the Work in Half the Time, First Principles in Scrum and the Scrum Papers. Until we got these documents as the GPT reference we would get a lot of nonsense from chatGPT. Its data includes recommendations from the 58% of Agile teams that fail." (Source: see the previous link.)

So, I decided to put Jeff Sutherland's GPT to the test by comparing it to my customized standard ChatGPT.

The Prompt of the Competition

Instead of asking a simple question, I used the following prompt on designing a complete workshop on Product Backlog management to test Jeff Sutherland's Scrum GPT and my customized standard ChatGPT 4.0: "I am planning to create a 6-hour Product Backlog management class in the form of cohort training.
There will be 3 sessions of 2 hours each spread over 4 weeks to provide time for the class attendees to apply what they learned. Essential parts of the class will be creating alignment between the Scrum team and its stakeholders, creating alignment among team members, agile product management in general, and classic Product Backlog anti-patterns identified in the Scrum Anti-Patterns Guide book. Please create a concept for this class, comprising lectures but mainly exercises covering the previously mentioned topics."

Please note: I ran the following sessions on Sunday, January 21, 2024, from 7:10 am CET to 7:45 am CET on the model GPT-4. (This requires upgrading to ChatGPT Plus.) I did not apply Grammarly to ChatGPT's answers.

Scrum Sage by Jeff Sutherland

This is Scrum Sage's suggestion for the workshop:

Quote: Based on the insights from the provided materials, here's a concept for a 6-hour Product Backlog management class structured into three 2-hour sessions spread over 4 weeks. This format will allow participants to apply what they learn between sessions. The focus of the class will be on aligning the Scrum team with stakeholders, fostering team cohesion, understanding agile product management, and addressing Product Backlog anti-patterns.

Session 1: Introduction and Creating Alignment

Objective: Introduce Product Backlog management and start creating alignment between the Scrum team and stakeholders.

- Introduction to Product Backlog: Role and importance of the Product Backlog in Scrum. The Product Owner's responsibilities: deciding work, owning the Backlog, and determining the order.
- Exercise: Creating a Hypothetical Product Backlog: Participants create a hypothetical Product Backlog. Emphasis on refining and estimating Product Backlog items. Discussion on the importance of a Definition of Done and creating value.
- Aligning With Stakeholders: Lecture on the importance of stakeholder feedback in prioritizing the Product Backlog.
- Interactive role-play exercise: Participants practice gathering feedback from stakeholders.
- Homework: Participants are tasked with drafting a Product Backlog for a hypothetical project, considering stakeholder input and product vision.

Session 2: Team Alignment and Agile Product Management

Objective: Enhance team alignment and understanding of agile product management.

- Team Alignment and Communication: Discussion on the necessity of constant dialogue between the Product Owner and the team. Activity: team-building exercise focused on effective communication and trust.
- Agile Product Management: Lecture on agile principles in product management. Case study analysis of successful agile product management.
- Exercise: Roadmap Creation: Participants create a product roadmap based on their homework. Emphasis on realistic, incremental planning and prioritization.
- Homework: Modify and improve the Product Backlog and roadmap based on insights from the session.

Session 3: Anti-Patterns and Continuous Improvement

Objective: Identify and address common Product Backlog anti-patterns and focus on continuous improvement.

- Identifying Anti-Patterns: Lecture on common Product Backlog anti-patterns. Group discussion on experiences and identification of potential anti-patterns.
- Exercise: Anti-Pattern Solutions: Participants work in groups to propose solutions to given anti-pattern scenarios.
- Continuous Improvement: Emphasizing the iterative nature of Scrum and the Product Backlog. Group activity: identifying areas for continuous improvement in their Backlog management.
- Conclusion: A wrap-up session summarizing key learnings, Q&A, and discussion on how to implement these practices in their work environment.

Throughout the course, emphasize the empirical nature of Scrum, the importance of adaptability, and the need for continuous improvement. Encourage participants to share their experiences and learnings between sessions, fostering a collaborative learning environment. End of quote.
My Customized ChatGPT 4.0

This is the suggestion of my customized ChatGPT 4.0 for the workshop:

Quote: Creating a 6-hour Product Backlog management class in a cohort training format, spread over 4 weeks with three 2-hour sessions, is a great approach to ensure participants have enough time to absorb and apply the concepts. Here’s a conceptual outline for your class, focusing on creating alignment, agile product management, and addressing classic Product Backlog anti-patterns:

Session 1: Introduction to Product Backlog and Stakeholder Alignment (Week 1)

Lecture (30 minutes)
- Introduction to Product Backlog: Basics of Product Backlog management, its role in Scrum, and its importance for project success.
- Stakeholder Mapping: Understanding different stakeholders and their impact on the Product Backlog.

Exercise (1 hour): Stakeholder Alignment Workshop
- Role Play: Participants act as different stakeholders (e.g., customers, team members, business managers) and express their expectations and concerns regarding the Product Backlog.
- Alignment Exercise: Identify common goals and conflicting interests among stakeholders and discuss strategies to align them.

Group Discussion (30 minutes)
- Debrief and Reflect: Share insights from the role play and alignment exercise. Discuss strategies to apply in real-world scenarios.

Session 2: Team Alignment and Agile Product Management (Week 2 or 3)

Lecture (30 minutes)
- Team Dynamics in Backlog Management: Importance of team alignment in refining and prioritizing the backlog.
- Principles of Agile Product Management: Overview of agile product management and how it influences backlog prioritization.

Exercise (1 hour): Backlog Refinement Simulation
- Group Activity: Break into small teams; each team refines a mock Product Backlog, considering different user stories, dependencies, and stakeholder inputs.
- Prioritization Game: Teams prioritize the backlog items based on various criteria (value, complexity, stakeholder influence).
Group Discussion (30 minutes)
- Debrief and Insights: Discuss challenges faced during refinement and prioritization. Share strategies and best practices.

Session 3: Addressing Product Backlog Anti-Patterns (Week 4)

Lecture (30 minutes)
- Common Product Backlog Anti-Patterns: Overview of frequent pitfalls and mistakes in Product Backlog management as identified in the Scrum Anti-Patterns Guide book.

Exercise (1 hour): Anti-Pattern Identification and Resolution Workshop
- Case Study Analysis: Review case studies or real-life scenarios that exhibit common anti-patterns.
- Problem-Solving Exercise: Groups identify anti-patterns in each case study and propose solutions or improvements.

Group Discussion (30 minutes)
- Sharing Solutions and Best Practices: Groups present their findings and solutions. Discuss how to apply these learnings to avoid common pitfalls in their work environments.

Additional Elements:
- Homework Assignments: Between sessions, give practical assignments where participants apply concepts to their current work.
- Feedback and Iteration: Encourage participants to share experiences and feedback during the next session.
- Interactive Tools: Use online collaboration tools for exercises and discussions (e.g., Miro, Trello).
- Guest Speakers: If possible, invite experienced Product Owners or Agile Coaches for Q&A sessions.
- Resource Kit: Provide a list of additional resources, such as articles, videos, and book excerpts for further learning.

This format ensures a balance between theoretical understanding and practical application, with a strong emphasis on interactive learning and real-world application.

End of quote.

Comparing Two Workshop Designs

The next step was to compare both workshop designs, and I tasked both models, respectively, using the following prompt: “I want you to compare your response with the following alternative response.
I am particularly interested in the differences and similarities of both approaches to designing a workshop for a Product Backlog management cohort, the advantages one design may have over the other, and finally, which of the two workshop designs you consider superior.”

Below you will find what Scrum Sage thinks about the workshop design of my customized ChatGPT, and vice versa:

Scrum Sage by Jeff Sutherland Compares the Two Workshop Designs

Note: “First design” refers to Scrum Sage’s suggestion; the “second design” to my customized ChatGPT design.

Quote: The comparison between the two workshop designs for a 6-hour Product Backlog management class is as follows:

Similarities:
- Structure and Duration: Both designs propose three 2-hour sessions spread over 4 weeks, allowing time for participants to apply what they learn.
- Focus Areas: Both workshops emphasize creating alignment (with stakeholders and team members), agile product management, and addressing Product Backlog anti-patterns.
- Interactive Exercises: Both designs incorporate practical exercises and group discussions to reinforce learning.
- Session Themes: Each session in both designs is dedicated to a specific theme: introduction and alignment, team dynamics and agile management, and addressing anti-patterns.

Differences:
- Session Content and Flow: The first design offers a more detailed breakdown of each session with specific exercises, such as creating a hypothetical Product Backlog, team-building exercises, and anti-pattern solution workshops. The second design focuses more on role-playing and simulation exercises, such as the Stakeholder Alignment Workshop and the Backlog Refinement Simulation.
- Methodology: The first design seems to focus more on practical applications with direct tasks like creating a Product Backlog and a product roadmap. The second design leans towards conceptual understanding first, followed by application through role plays and simulations.
- Tools and Resources: The second design explicitly suggests using interactive tools like Miro or Trello and includes guest speakers and a resource kit, which are not mentioned in the first design.
- Homework Assignments: While both designs include homework, the first design explicitly integrates the assignments into the structure of the workshop, ensuring a continuum of learning and application.

Advantages:
- First Design: Offers a more structured approach with clear tasks and objectives for each session, which might be more beneficial for learners who prefer a hands-on, practical approach to learning.
- Second Design: The inclusion of role plays, simulations, and interactive tools might offer a more engaging and varied learning experience. The addition of guest speakers can provide diverse perspectives and real-world insights.

Superior Design: Determining which design is superior depends on the specific needs and preferences of the attendees. If the goal is to provide a practical, hands-on experience with clear objectives and tasks, the first design might be more suitable. However, if the emphasis is on engagement, varied learning styles, and a blend of theory and practice, the second design could be more effective. Both are robust in their approach and can be effective depending on the cohort’s learning style and objectives.

End of quote.

My Customized ChatGPT Compares the Two Workshop Designs

Note: “Alternative design” refers to Scrum Sage’s suggestion; the “original design” to my customized ChatGPT design.

Quote: Both workshop designs focus on Product Backlog management within a 6-hour class divided into three 2-hour sessions over 4 weeks. They aim to enhance participants’ understanding and skills in aligning the Scrum team with stakeholders, agile product management, and tackling Product Backlog anti-patterns.
Let’s compare them based on structure, content, and pedagogical approach:

Similarities:
- Duration and Spread: Both workshops are spread over 4 weeks with three 2-hour sessions, allowing time for practical application of concepts between sessions.
- Focus Areas: Both emphasize aligning the Scrum team with stakeholders, understanding agile product management, and addressing Product Backlog anti-patterns.
- Interactive Learning: Both designs incorporate interactive elements like exercises, role plays, and discussions to facilitate active learning.
- Homework Assignments: Each session in both designs ends with homework, reinforcing learning and encouraging practical application.

Differences:
- Session Structure: The alternative design explicitly outlines each session’s objectives, such as team alignment, stakeholder engagement, and anti-pattern identification. The original design integrates these themes throughout but doesn’t specify them session by session.
- Content Focus: The alternative design appears more structured in content flow, with a clear progression from introduction to advanced topics. The original design emphasizes an immersive experience, integrating theory and practice throughout the sessions without a clear linear progression.
- Methodology: The alternative design uses a more traditional educational approach with lectures, discussions, and exercises. The original design might lean more towards experiential learning, though this aspect is not explicitly detailed.
- Team Dynamics and Communication: The alternative design includes a specific team-building exercise focusing on communication and trust. The original design doesn’t specify such an exercise, but it could be inferred from the overall approach.
- Addressing Anti-Patterns: The alternative design has a dedicated session for anti-patterns, with practical group exercises for solutions. The original design likely integrates anti-pattern discussions throughout the course.
Advantages:
- Alternative Design: Provides a clear, structured approach that may be easier for participants to follow. The explicit focus on different themes per session helps in building a comprehensive understanding gradually.
- Original Design: Likely offers a more fluid and integrated learning experience, potentially leading to deeper understanding through continuous application and reflection.

Superiority: Determining which design is superior depends on the participants’ needs and learning preferences: If the participants prefer structure and a clear delineation of topics, the alternative design might be more suitable. If they benefit from a more holistic and integrated learning approach, the original design could be more effective. Both designs have merit and can be effective, depending on the context and the learning style of the participants.

End of quote.

Conclusion

Considering this brief Scrum GPT experiment, which took about 30 minutes, two questions remain: Can you use either suggestion out of the box? And which workshop design is better?

The first question has a simple answer: Of course not. You will need to invest more work to turn either workshop design into an actionable concept.

The second question is not as trivial as it looks: The workshop design by Jeff Sutherland’s Scrum Sage offers a clear, structured practice with specific tasks and objectives per session, making it easier for participants to follow. This approach is particularly effective for learners who favor a practical, hands-on learning style, as it builds comprehensive understanding by focusing on varied themes in each session. Conversely, the workshop design by my customized ChatGPT likely facilitates a more fluid, integrated learning experience that encourages deeper understanding through continuous application and reflection.
Using role plays, simulations, and interactive tools enhances engagement and variety in learning, while guest speakers contribute diverse perspectives and real-world insights.

Ultimately, neither workshop design can be labeled “superior” to the other; it is a question of the use case: If you start with beginners, I would probably choose Jeff Sutherland’s Scrum Sage version. For intermediate and advanced practitioners, I would go with the design of my customized ChatGPT.

However, an interesting question looms: Do we need a full-time Scrum Master per team if ChatGPT in general, and a Scrum GPT in particular, are already so well-positioned to deliver some core parts of a Scrum Master’s work? Of course, a Scrum GPT will never be able to cover the human communication and relationship building of Scrum Masters and agile coaches. But what about many other aspects of their work, for example, workshop design?

As so often, humans overestimate the impact of innovation in the short term and underestimate its importance in the long term, and GPT-5 is already in the making. At $20 per month, ChatGPT is a steal.
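As an aside for readers who would rather script such experiments than run them through the ChatGPT UI: the comparison step above can also be reproduced programmatically. The following is a minimal sketch assuming the official OpenAI Python SDK; the function names and prompt assembly are my own, based on the comparison prompt quoted earlier.

```python
# Hypothetical sketch: scripting the "compare two workshop designs" step
# via the API instead of the ChatGPT UI. Assumes the official OpenAI
# Python SDK (`pip install openai`) and an OPENAI_API_KEY environment
# variable; helper names are illustrative, not from the article.

def build_comparison_prompt(own_design: str, alternative_design: str) -> str:
    """Assemble the comparison prompt used in the experiment above."""
    return (
        "I want you to compare your response with the following alternative "
        "response. I am particularly interested in the differences and "
        "similarities of both approaches to designing a workshop for a "
        "Product Backlog management cohort, the advantages one design may "
        "have over the other, and finally, which of the two workshop "
        "designs you consider superior.\n\n"
        f"Your response:\n{own_design}\n\n"
        f"Alternative response:\n{alternative_design}"
    )

def compare_designs(own_design: str, alternative_design: str) -> str:
    """Send the comparison prompt to GPT-4 and return the model's answer."""
    from openai import OpenAI  # imported lazily so the prompt builder has no dependency
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "user",
                "content": build_comparison_prompt(own_design, alternative_design),
            }
        ],
    )
    return response.choices[0].message.content
```

Scripting the step this way would make the experiment repeatable, for example, to rerun the same comparison against a newer model later.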