June 7th, 2021 by Thea Maisuradze
In the previous post, we examined why and how agile software estimation goes wrong so often, and we identified three principles that must support any good estimation method:
- Reliability - An estimate must reflect the work actually performed, within a reasonable confidence interval.
- Objectivity - The same estimate should be given for a task, regardless of who's giving the estimate.
- Consistency - Given nothing else has changed, an estimate should not change with time.
I call these the ROC principles. In this post, we'll start looking at techniques and methods we can apply to our estimation process to implement these principles. In order to properly ROC-ify our estimates, the first thing we need to grok is the difference between complex and complicated tasks.
Is The Task Complex or Complicated?
Many people conflate complexity with complicated-ness or difficulty. In everyday life, we use the terms complex and complicated interchangeably. However, when we talk about estimation, we need to distinguish between the two. A clear understanding of these two concepts and their implications is crucial for providing ROC-ing estimates.
Understanding Complexity
A complex system is one whose components interact in multiple ways and follow local rules, meaning there is no adequate generic definition of all its possible interactions. In other words, it's a system where, as it expands and more separate parts are added to it, its internal interactions increase exponentially, even to the point where its behavior cannot be modeled or predicted. Any task that requires interaction between different parts of a computer system will have a degree of complexity. A task that simply defines the display of some data on a web page will typically orchestrate interactions between a database, some application-layer component, and the browser, giving it a small but well-defined degree of complexity.
Complexity creates uncertainty. A complex task or system is, by definition, unpredictable to a certain degree. The other parent of uncertainty is ignorance. Sometimes we lack the information to assess the complexity or implications of a task. When we undertake a task that involves uncertainty, we are dealing with risk. Risk occurs both when we know all possible outcomes (each with a certain probability) and when we don't know all the possible outcomes. So complex tasks are inherently risky. Consider the previous example of the task that simply defines the display of some data on a web page. We know that the most likely outcome is that the data will be displayed as intended. We also know that there is a very small chance that a database or browser error will prevent the data from being properly displayed. The small complexity of the task creates a small but present risk.
Identifying and quantifying risk is critical when estimating. We’ll talk more about markers for identifying risk later on in this post series.
Understanding Complicated Tasks
Difficult is an adjective often used as a synonym of 'complicated.' A complicated system may or may not have multiple interacting parts, so it may or may not be complex. A complicated system involves convoluted or intricate steps that are difficult to understand or execute. Once these steps are understood, then the system becomes entirely predictable.
Difficulty implies effort. It takes time and brainpower to untangle the convoluted instructions needed to run a task or to analyze the workings of an intricate problem. But once we do put that effort in, we can minimize uncertainty and have confidence in the expected system behavior.
Consider, for instance, the Smith-Waterman algorithm. It’s a complicated algorithm that takes a lot of effort to understand and implement. However, it is not a complex algorithm. It operates on a single set of data and consists of only 4 steps. There are very few variables in it. But trying to understand the rules on which these variables operate and how the scoring, substitutions, and tracebacks are applied is no simple feat (I’m speaking from experience). There are so many subtle details one has to comprehend and so many decision points in the workflow that it can take many days to get it right. Yet, it’s a relatively simple algorithm. Once the rules and workflow are understood, we can easily and confidently predict the system’s behavior, given some initial conditions. The same cannot be said of a truly complex system.
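To make the contrast concrete, here is a minimal Python sketch of Smith-Waterman, assuming simple match/mismatch scores and a linear gap penalty (the full algorithm also supports substitution matrices and affine gap penalties). Every line is deterministic and predictable, yet working out the scoring and traceback rules takes real effort - complicated, not complex:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Local sequence alignment (Smith-Waterman, linear gap penalty)."""
    rows, cols = len(a) + 1, len(b) + 1
    # Scoring matrix: cell (i, j) holds the best local alignment score
    # ending at a[i-1], b[j-1]; scores never drop below zero.
    H = [[0] * cols for _ in range(rows)]
    best, best_pos = 0, (0, 0)
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            if H[i][j] > best:
                best, best_pos = H[i][j], (i, j)
    # Traceback from the highest-scoring cell until a zero cell is reached.
    i, j = best_pos
    out_a, out_b = [], []
    while i > 0 and j > 0 and H[i][j] > 0:
        if H[i][j] == H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch):
            out_a.append(a[i-1]); out_b.append(b[j-1]); i, j = i - 1, j - 1
        elif H[i][j] == H[i-1][j] + gap:
            out_a.append(a[i-1]); out_b.append('-'); i -= 1
        else:
            out_a.append('-'); out_b.append(b[j-1]); j -= 1
    return best, ''.join(reversed(out_a)), ''.join(reversed(out_b))
```

Given the same inputs and parameters, the output is always the same - once understood, the system is entirely predictable.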
The following diagram illustrates the implications of complexity and complicatedness:
Most tasks we’ll encounter during software development will have a degree of both complexity and difficulty. Complex tasks will comprise many internal interactions, some of which will involve some complicated steps or workflows. To give realistic estimates of such tasks, we need to distinctly identify and assess how complex and how complicated they are. We can do so by estimating in 2D.
2D Estimates
So, now that we understand that complexity and difficulty are not the same, we can start looking at tasks differently when it comes to estimating them. Traditionally, we estimate tasks on a linear scale. We give a task a single value, usually a story-point value, based on the task's perceived difficulty, complexity, or effort (there is no consensus or clear guidance on what a story point represents). Let's stop doing that. No scalar value can capture both the effort and the risk associated with a task. As we've already discussed in the previous sections, the complexity of a task is associated with risk, while its difficulty or complicated-ness is related to effort. So let's start estimating on a plane instead of a line. The following diagram illustrates the concept:
Each task we estimate is placed on the estimation plane according to both its estimated complexity and its complicated-ness (difficulty). High-difficulty tasks will be placed more towards the right side of the plane, while high-complexity ones will be placed towards the top of the plane. We can observe that Task 3, for instance, is more difficult than Task 4 and will require more effort or time. Task 4, on the other hand, is more complex than Task 3 and involves greater risk.
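As a sketch of the idea (the class name, task names, and point values here are invented for illustration), a 2D estimate can be modeled as a pair of values rather than a single story point:

```python
from dataclasses import dataclass

@dataclass
class Estimate2D:
    complexity: int   # vertical axis: a proxy for risk
    difficulty: int   # horizontal axis: a proxy for effort/time

# Hypothetical tasks placed on the estimation plane
tasks = {
    "Task 3": Estimate2D(complexity=2, difficulty=5),
    "Task 4": Estimate2D(complexity=4, difficulty=3),
}

# Task 3 demands more effort; Task 4 carries more risk
assert tasks["Task 3"].difficulty > tasks["Task 4"].difficulty
assert tasks["Task 4"].complexity > tasks["Task 3"].complexity
```

Keeping the two axes separate is the whole point: collapsing them into one number would discard the distinction between risk and effort.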
The baseline task at the bottom-left part of the plane is the simplest and easiest task applicable within our domain and application. Baselining is very important in establishing objectivity and consistency in our estimates. Let’s discuss this a bit further.
Baselining
Most of the time, the baseline will be a 'hello world!' type of task. The baseline task serves as the sounding line for our estimates. There are three rules about the baseline:
- No other task should be estimated at fewer points than the baseline task.
- All other tasks must be estimated in relation to the baseline task.
- You do not talk about the baseline. (Nah, just kidding!)
Fight Club references aside, a baseline task is like a unit of account, a bit like the US dollar, British pound, or Japanese yen in the financial world: just like we can value assets in multiples of dollars or pounds, so we can estimate tasks in multiples of our baseline. We could, for example, say that a task is twice as complex and five times as complicated as our baseline. Just like the dollar, the baseline helps establish objectivity. Most people in the world know the value of the dollar and can value things in relation to it; everyone knows what you can or cannot buy with x amount of dollars. Similarly, everyone on our team will know what the baseline task represents and will be able to estimate other tasks in relation to it.
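The unit-of-account idea can be sketched in a few lines (the function name and the tuple representation are illustrative, not part of the method as originally described):

```python
# The (complexity, difficulty) of the simplest task in our domain --
# our unit of account, like the dollar. Values here are illustrative.
BASELINE = (1, 1)

def estimate_from_baseline(complexity_multiple, difficulty_multiple):
    """Express a task's 2D estimate as multiples of the baseline task."""
    return (BASELINE[0] * complexity_multiple,
            BASELINE[1] * difficulty_multiple)

# "Twice as complex and five times as complicated as our baseline":
assert estimate_from_baseline(2, 5) == (2, 5)
```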
Baselining can also help provide consistency to our estimates. To do so, we need to adjust our baseline by assessing several environmental factors that affect the team:
- Domain experience: is the team experienced with the domain they're working in? In some domains, like certain finance sectors, for example, a lack of domain knowledge can lead to mistakes and defects. If domain experience is totally lacking in your team, move the baseline task two points up the vertical (risk) axis. If domain experience is partially lacking, move the baseline task one point up the vertical axis.
- Technology experience: have the team members worked with the selected technologies before? If not, then expect a steep learning curve, which will affect productivity and cause mistakes and bugs. If there's a technological knowledge gap in the team, move the baseline one point along both the vertical (risk) and horizontal (effort) axes.
- Team spirit: is the team harmonious and cohesive? Do the team members work well together? Are there new members in the team? Has there been any friction or clashes? Cohesive teams are productive teams. Unless the team is fully harmonious and everyone works well together, move the baseline one or two points along the horizontal axis.
- Morale: is the team enthusiastic about the project and the technology? Has the company been making redundancies or applying unpopular policies? Low morale almost always correlates with low productivity. Move the baseline one point along the horizontal axis if morale isn't great.
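The four rules above can be sketched as a simple adjustment function (the function name, parameters, and their encodings are my own illustration of the rules, not a prescribed API):

```python
def adjust_baseline(base, *, domain_gap=0, tech_gap=False,
                    team_friction=0, low_morale=False):
    """Shift the baseline (risk, effort) per the environmental factors.

    domain_gap:    0 = experienced, 1 = partial gap, 2 = total gap (risk axis)
    tech_gap:      unfamiliar technology adds one point on both axes
    team_friction: 0-2 extra effort points for a less-than-cohesive team
    low_morale:    adds one effort point
    """
    risk, effort = base
    risk += domain_gap                # domain inexperience increases risk
    if tech_gap:                      # learning curve: more risk AND effort
        risk += 1
        effort += 1
    effort += team_friction           # friction or new members slow us down
    if low_morale:
        effort += 1
    return (risk, effort)

# Partial domain inexperience plus some new team members:
assert adjust_baseline((1, 1), domain_gap=1, team_friction=1) == (2, 2)
```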
The following diagram shows an adjusted baseline after considering our team’s environmental factors:
In this diagram, we have assessed our team's morale, spirit, and their domain and technological skills and experience. We agreed that the team is not very experienced in the domain we are working in. This increases risk, so we moved the baseline one point up the vertical axis. We also have some new team members, so we moved the baseline one point along the horizontal axis, as it will require more team effort to onboard the new members and bring them up to speed. Our baseline is now at coordinates (2,2). Note how all the other tasks have been moved proportionately. The same environmental factors that affect our baseline will equally affect all other tasks too.
Adjusting the baseline according to environmental factors allows us to estimate collectively by taking into account factors that affect our whole team and not just providing our personal subjective estimate on the task. If we have to re-estimate the task sometime in the future, we will adjust the baseline according to the current environmental factors, thus ensuring our estimates are more consistent and more objective.
TL;DR
In this post, we discussed the concepts of complexity and complicated-ness (difficulty) and their implications for estimation. We then introduced the 2D estimation method and examined the notion of adjusted baselines and environmental factors to help increase our estimates' consistency and objectivity.
In the next and final post in our estimation series, we’ll delve more into 2D estimation and discuss how to identify and quantify risk in our tasks, among other things. Stay tuned!
Fred Heath is a software jack of all trades, living in Wales, UK. He is the author of Managing Software Requirements the Agile Way, available for purchase on Amazon.