Managing complexity and uncertainty in Software Development: know when to press pause

Michał Ostruszka

12 Apr 2024, 7 minutes read

Whenever a new feature or an entirely new system is to be introduced, we usually expect to solve the task by writing new software or changing existing software to support it. That is indeed true in most cases - we build software that meets business needs. Sometimes, though, it is priceless to reach the point where we collectively decide that either no software is needed (or not that much), or that it's too expensive or too risky to build a given thing at the moment. Things that look easy and simple on the surface may turn out to be complex and complicated, not only from a technical point of view but also from a business perspective. It may be better to postpone such a feature until we have more data and a clearer understanding to make an informed decision, and to focus instead on other items in the backlog that are better defined and discovered and that also have a high impact on the product as a whole.

In this article, I'll show a case from one of the projects where that exact scenario was at play. Long story short, the business decided to put a long-requested and much-discussed feature on hold because of what became apparent during discovery sessions conducted as stripped-down versions of Big Picture and Process Level Event Storming.

Discover problems early and build the right thing

We all know that the later a problem is discovered in the process of building a system or feature, the harder it is to correct and the more expensive the correction itself is, both in terms of time and real money.

It's always better to discover an issue in code during development than to learn about it when it manifests as a bug in production, impacting your users. The same goes for system design decisions: it's better to anticipate their shortcomings early, while designing or building, than to discover their critical limitations once they have already shipped.

You may see a pattern here, and there is, in fact, one. If we zoom out a bit, to the level of a feature or a product, it becomes clear that it's better to discover potential gaps, risks, and uncertainties early - before we decide to build the thing and allocate teams, time, and resources - than to go wild, driven by gut feeling, only to figure out later in development that things were not as easy and simple as we thought.

Sometimes it's worth taking the risk, and going fast may very well pay off, but equally often these decisions turn out to be really costly and lead to building the wrong things - things that either have to be rebuilt or get abandoned because they don't perform as expected.

The project this story is about was an already running system for which a bunch of new, ground-breaking features were being discussed. Among them was a feature (or rather a new “product” offering) that conceptually touched multiple areas of the existing system, making use of and orchestrating some of its parts. It was a system in the financial domain, operating with real money, and the feature was highly anticipated by both users and product people. The thing was that while the feature was conceptually simple for end users to understand, and relatively simple from a product perspective, the fact that it was bound to touch multiple areas made it quite complex at a lower level and in terms of potential implementation.

Data-backed decisions

We conducted countless meetings with product people and other teams, asked and answered lots of questions to verify and back up our understanding, etc., but we still weren't sure we were all on the same page and whether the business was aware of the risks related to complexity, timeline, and so on. And we really wanted that confirmation, as we knew it was going to be a long ride if we committed to it.

Because scheduling ad-hoc meetings and just throwing questions and answers at each other didn't feel right and didn't bring much value, we decided to jump into a longer session that turned out to be roughly modeled after Event Storming - we took only the parts we felt we needed. What we wanted to achieve was to have everyone agree on the process, its steps, the details of executing each of them, as well as the potential non-happy-path cases.

But we didn't start from scratch. Instead, we laid down the foundations (in the form of a Miro whiteboard) with flows picturing our understanding of the feature at that moment. We did that to have something to work on rather than spend time rebuilding it from scratch, especially since we knew parts of the process were already well discovered.

As for the attendees, we chose to invite sponsors, product owners, and people from the accounting and legal departments. The last two may seem surprising at first, but keep in mind that it was a system in the financial domain, so everything had to be done within strict regulations, both fiscal and legal.

During the intense, few-hours-long workshop, we went through the entire process bit by bit, discussing each step and raising concerns and questions to and from legal, accounting, product, and engineering.

By the end of the workshop, we had our initial board augmented with lots of notes, questions, and points we weren't collectively sure about. Going through the process in detail revealed spots where we had slightly different understandings of some concepts and of how things worked in the other areas we had to touch. It even turned out, while analyzing possible paths, that things previously considered OK and feasible were flagged as troublesome by legal and accounting due to restrictions we had to respect, so different approaches had to be taken.

While we finally arrived at the point where we had the entire process mapped and everyone was on the same page, it also became apparent that the feature was even more complex than we all thought and that there were a lot of “hot spots” with questions we didn't have good answers for.

It may sound depressing that, having started the workshop hoping to build a common understanding and gather more data to kick off development, we ended up with more questions instead and with a feature that looked more complicated than at the beginning. However, if you zoom out and look at it from a higher level, we simply got a lot more data to help us make an informed decision. We aligned our understanding of the feature, found where we were right and where we were wrong, and discovered a few opportunities to simplify things, along with a lot of new limitations and complexities we weren't aware of before.

As a result of that workshop, given the amount of information they had at hand, the product team decided to put the feature on hold, assessing its cost, complexity, timeframe, and overall risk in that form as too high. Instead, they started researching alternative approaches that would be less complex but could help validate the overall idea faster, with less cost and risk, while still obeying all the rules.

Summary

While it's probably not the ideal outcome that business and product owners usually want to see after such a workshop, it's important to notice that we actually saved a lot of time, effort, money, and probably a lot of added complexity, simply because we didn't commit to that feature. All this money, time, and all these teams and resources can instead be channeled into another feature of the product that levels up our business's competitive advantage.

So, while it's usually about building software, it's sometimes good to spend some long hours thoroughly workshopping the problem in a structured way with all the involved parties, to save ourselves from building software that would not meet the expectations and all the requirements.

Reviewed by: Rafał Maciak, Mateusz Palichleb
