Bottom Up Problem Solving - Part II

In the previous post, I explained how a lot of programming comes down to simplifying big problems and solving smaller ones through a sequence of changes. In this post, I will share my thoughts on the practical significance of following that model in a software engineering environment.

This bottom-up process optimizes for making small iterative changes and derisking releases. Small changes are desirable because they are easier for code reviewers to understand and evaluate. Smaller changes are also inherently less risky. In a safe and fast continuous deployment pipeline, shorter reviews and lower risk naturally drive faster release cycles. Faster release cycles mean faster feedback.

Having a fast release and feedback cycle can make the process of simplifying top-down and building bottom-up much easier in the face of uncertainty. There are scenarios where trying to break down the big problem top-down may:

  • not be possible, e.g. there is not enough knowledge about the big picture;
  • require a lot of resources, e.g. it is a big or expensive undertaking; or
  • not guarantee success, e.g. it has never been tried before.

Yet a fast feedback loop enables everyone to focus on the next iterative step towards the larger goal. This is when small changes can drive results while letting the engineering and product teams retain enough agility.

I have worked with people who, in an environment that facilitated this, were able to consistently ship 20-30 changes per week. They did it by making small changes that got reviewed quickly and released automatically. Each of those changes carried enough weight to have a visible effect, and by the end of the week all those small marginal gains added up to a lot.

As a practical example, a serverless web application I work with had its early releases staged like this:

  1. Empty CloudFormation stack plugged into an automated deployment system.
  2. Deployment of cloud resources required to run a serverless app.
  3. Deployment of a “Hello, World!” app, with the beginnings of a test suite (sketched after this list).
  4. Deployment of a non-functional login form.
  5. Deployment of authentication backend.
  6. Deployment of login implementation.
  7. Deployment of a non-functional user list.
  8. Deployment of user creation form.
  9. Deployment of a user list implementation.
  10. And so on…
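
To make step 3 concrete, here is a minimal sketch of what that “Hello, World!” stage could look like: a single AWS Lambda handler behind an HTTP endpoint. The file name and the API Gateway event types are assumptions for illustration, not details of the actual project; the point is how little code is needed for the first releasable step.

```typescript
// hello.ts - a hypothetical first deployable unit for step 3.
// The event/result types come from the @types/aws-lambda package.
import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

export const handler = async (
  _event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => ({
  statusCode: 200,
  headers: { "Content-Type": "text/plain" },
  body: "Hello, World!",
});
```

The “beginnings of a test suite” mentioned in that step can be just as small - a single test asserting that the handler returns a 200 response is enough to anchor the pipeline.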

This happened over two days and, by the end of the week, we had a prototype of a system that was ready to be put in front of clients. This is a specific example and doesn’t carry a lot of detail beyond being an abstract list, but it does show that this type of bottom-up building is extremely efficient if you want to move fast. In a startup environment, where most of my experience comes from, showing results fast can be paramount for company survival. Or, less dramatically, it can make the difference between signing a client today and “reconnecting in 6 months, when you have the feature ready.”

This is neither a new idea nor specific to software engineering. I am reading Atomic Habits by James Clear, and in one of the chapters he writes about building “big” habits. The gist is:

When you start a new habit, it should take less than two minutes to do. Nearly any habit can be scaled down into a two-minute version, e.g.: “Read a book before sleep” starts with “Read one page.” <…> You have to establish a habit before you optimize it. Instead of trying to engineer a perfect habit, start with an easy thing. You have to standardize before you can optimize.

This is the same principle, applied to software engineering. There should be no hesitation or feeling of inadequacy in starting small and simple. When you join a new company with an established project or team, it is very easy to be overwhelmed by the apparent complexity of the system and the tools. This may lead to attempts to match that complexity. Resist, and remember - every complex system evolved from a simple one¹. Therefore it is good to imagine the end goal, but always focus on the first step forward.

A non-obvious aspect of this is the need to break the parity between product features and code changes. I have worked in one or two companies where the expectation was that one feature request, bug report, or other product-improvement task requires one code change (in the form of a single PR), which needs to be released at once or in a bundle with a few others. There are a few subtle problems with this approach.

First, the way the big picture is broken down by someone outside the engineering organization may carry assumptions that do not match the code structure or even the engineering processes. That is not to say that the product manager is doing something wrong - no. What it means is that an engineer should still be able to apply this bottom-up principle and break a task down further, if it makes sense.

For example, an “Implement a Forgot Password” feature request could become:

  1. Deploy an empty page at /forgot-password.
  2. Update database user model with reset token and timestamp.
  3. Implement, test, and deploy password reset functions for verifying the token and setting a new password (sketched after this list).
  4. Implement the form at /forgot-password and connect it to the code from step #3.
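
As an illustration of step 3, here is a hypothetical sketch of the reset functions as two small, separately testable units. The field names (resetTokenHash, resetTokenExpiresAt), the Db interface, and the hashing choices are placeholders made up for this post, not the schema or code of any project mentioned above.

```typescript
// passwordReset.ts - a sketch of step 3 under the assumptions stated above.
import { createHash, randomBytes, scryptSync } from "crypto";

interface User {
  email: string;
  passwordHash?: string;
  resetTokenHash?: string | null;
  resetTokenExpiresAt?: number | null;
}

// Minimal persistence interface; stands in for whatever the real user model uses.
interface Db {
  findUser(email: string): Promise<User | undefined>;
  updateUser(email: string, fields: Partial<User>): Promise<void>;
}

const sha256 = (value: string) =>
  createHash("sha256").update(value).digest("hex");

// Issue a token and store only its hash plus an expiry timestamp
// (the columns added in step 2). The plain token is e-mailed to the user.
export async function createResetToken(db: Db, email: string): Promise<string> {
  const token = randomBytes(32).toString("hex");
  await db.updateUser(email, {
    resetTokenHash: sha256(token),
    resetTokenExpiresAt: Date.now() + 60 * 60 * 1000, // valid for one hour
  });
  return token;
}

// Verify the token and set the new password; returns false on any mismatch.
export async function resetPassword(
  db: Db,
  email: string,
  token: string,
  newPassword: string
): Promise<boolean> {
  const user = await db.findUser(email);
  if (!user || !user.resetTokenHash || !user.resetTokenExpiresAt) return false;
  if (user.resetTokenHash !== sha256(token)) return false;
  if (Date.now() > user.resetTokenExpiresAt) return false;

  const salt = randomBytes(16).toString("hex");
  const passwordHash = `${salt}:${scryptSync(newPassword, salt, 64).toString("hex")}`;
  await db.updateUser(email, {
    passwordHash,
    resetTokenHash: null,
    resetTokenExpiresAt: null,
  });
  return true;
}
```

Each of these pieces - the page, the schema change, the functions, the form - can be reviewed and released on its own, which is exactly the kind of parity-breaking described above.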

Second, even if the changes are broken down into smaller chunks, this expectation of feature-change parity may lead to bundling. Bundling is releasing related code changes all at once, even if they are individually small. It immediately increases the risk that something will go wrong. And since any individual part of the bundle can break, it may be unpredictably difficult to roll back the release or roll forward a fix.

Derisking means breaking changes down and releasing them not necessarily along the lines of what the task asks an engineer to do, but by what the developer thinks they can confidently get reviewed and released fast. In some ways, the size of the change list should reflect the developer’s confidence with the system and the team’s shared understanding of what constitutes a simple versus a risky change.

As a second practical example, a very underspecified task for a proof-of-concept project at work has about 60 individual code changes attached to it. The reason is that the engineers at the time acknowledged that it was not clear which direction the product was going, and opted for a series of tiny changes, verifying their assumptions at each step. At another time, an improvement to an existing feature had about 10 changes. Yet in terms of new and changed code, both instances were of about the same size.

In summary:

  • resist complexity, even when you see it;
  • have a rough idea of what you want to achieve;
  • focus on the small and simple step that would move you there.

I hope that if you have ever felt intimidated by a programming exercise, a library, a tool, or a task at work, this post gave you enough confidence, through my practical examples, to look for ways to simplify, break the problem down, and find the next small change that would move you towards your bigger target.

Good luck!



  1. In fact, this is better known as Gall’s Law, named after John Gall, author of “The Systems Bible”:

    A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.

