The true cost of adding new features

Last week I saw an essay from Kris Gale that struck a chord with me:

For years, the two things that most frustrated me to hear from product managers were “how hard would it be…” and “can’t you just…” It took me quite a while to figure out why I had such a strong, visceral reaction to these phrases. The answer seems obvious now: The work of implementing a feature initially is often a tiny fraction of the work to support that feature over the lifetime of a product, and yes, we can “just” code any logic someone dreams up. What might take two weeks right now adds a marginal cost to every engineering project we’ll take on in this product in the future. In fact, I’d argue that the initial time spent implementing a feature is one of the least interesting data points to consider when weighing the cost and benefit of a feature.

Every feature you add leads to more possible bugs, as the features interact with each other. (More potential bugs in every individual case; averaged over all the cases, that means more actual bugs.)
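
One rough way to see why, a back-of-envelope sketch of my own rather than anything from Gale's essay: if bugs come largely from pairwise interactions between features, then the number of places a bug can hide grows quadratically with the feature count, not linearly.

```python
# Back-of-envelope illustration (my own, not from Gale's essay): count the
# pairwise interactions among n features -- each one a place a bug can hide.
def pairwise_interactions(n_features: int) -> int:
    return n_features * (n_features - 1) // 2

for n in (2, 10, 50, 100):
    print(f"{n} features -> {pairwise_interactions(n)} possible pairwise interactions")
# 2 -> 1, 10 -> 45, 50 -> 1225, 100 -> 4950
```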

It also makes it subsequently take longer to do further development, as there is more code for developers to wrap their heads around: “The first time you open a new section of code, you need to orient yourself. The fewer things the code does, the quicker this process.”

Users also pay a complexity cost in the form of cognitive overhead when you add things to your product. You might think you’ve found a way to get complexity for free by making these features hidden or ‘advanced’, but you’re fooling yourself. I can’t count the times I’ve somehow triggered a hidden feature in software, only to spend an obnoxious amount of time trying to figure out what happened or how to revert it.

More features also make it harder to create a UI that encompasses all of them while still being comprehensible: which means developers need to spend more time designing the UI, and their chances of successfully building an easy and enjoyable UI go down.

And this increase is not just additive: when adding the 10th (or 100th) feature to existing software, the UI redesign work takes much more time than it did for the 1st or 2nd feature. (I don’t really think ‘features’ are necessarily indivisibly countable like that, but we have to talk about it somehow.) Because now you have to figure out how to make the UI comprehensible with 10 or 100 features, not just design a UI for this one new feature. And that goes for the ‘back end’ development work too, which is the point: the more you stuff into a piece of software, the higher the ‘incremental’ cost of stuffing more in, and the more time gets spent responding to bugs and improperly working software (instead of doing new development at all), because the software has more metaphorical moving parts rubbing against each other.
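
If it helps to see that non-additive growth quantified, here is a deliberately crude toy model of my own (not from Gale): suppose each new feature has a fixed build cost plus a small reconciliation cost against every feature already present.

```python
# Toy cost model (my own illustration, not from Gale's essay): the nth feature
# costs a fixed amount to build, plus a reconciliation cost against each of the
# n - 1 features already in the product (UI rework, interaction testing, etc.).
def incremental_cost(n: int, build: float = 1.0, reconcile: float = 0.25) -> float:
    return build + reconcile * (n - 1)

def total_cost(n_features: int) -> float:
    return sum(incremental_cost(n) for n in range(1, n_features + 1))

for n in (1, 2, 10, 100):
    print(f"feature #{n} costs {incremental_cost(n):.2f}; first {n} features cost {total_cost(n):.2f} in total")
# The incremental cost keeps climbing, and the running total grows roughly quadratically.
```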

A pathological example of complexity cost that needs to be driven out of engineering teams entirely is a strange habit I’ve seen of people carrying bugs forward as features. Here’s how this usually works: Behavior in cases not defined in the original requirement ends up working one way or another based on incidental implementation decisions. Eventually, users find and adapt to these things and will report bugs when the logic changes, despite the fact that nobody intentionally designed the behavior the users expect into the system.

I bet most of my readers in library technology have seen that too. Why is that bad? Because it adds complexity to maintenance: what was just an accident is now a ‘feature’ that you have to make sure keeps working through all changes, and whose interactions with any new changes you have to account for.
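
To make that concrete, here is a made-up example (the function and scenario are hypothetical, not from Gale): behavior nobody specified, arising purely from how the code happened to be written, that users come to rely on.

```python
# Hypothetical example (not from Gale's essay): the requirements never said
# anything about result ordering, but this implementation happens to return
# matches in the order records were added, simply because it scans a list
# front to back.
def search(records: list[dict], term: str) -> list[dict]:
    return [r for r in records if term in r["title"]]

# Users notice that "recently added items show up last" and build workflows
# around it. Later someone adds relevance ranking or swaps in a database
# backend, and bug reports arrive about the "broken" ordering, even though
# nobody ever designed that ordering in. The accident is now a feature that
# has to be preserved (or deliberately and visibly retired).
```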

Yep, as Gale writes, we developers are constantly faced with “Can’t you just…” and “How hard would it be?” And the problem is not how long it will take to implement the latest “It can’t be that hard, can it?” being asked for; it might indeed not be that time-consuming (although sometimes it is more time-consuming than the stakeholder asking can possibly imagine, other times it isn’t, but either way…). The problem is that if you throw in everything that gets asked for like this, you wind up with bad software: software which works poorly, has lots of bugs, is very time-consuming to maintain, is increasingly difficult to add new features to, and is confusing for users too. (If your software is used by some patrons at least weekly, then users who have been with it all along may have had time to adjust to each new feature without cognitive overload; but every new user, or every user who only uses the software monthly, is overwhelmed with its complexity.)

So it’s part of our job to push back on these requests, to try to keep our software simple and manageable — not just to try to please the stakeholders/customers by showing that we can indeed implement every “How hard can it be?” that they ask for.

(And, note well, this applies to our library-sector software vendors vis-à-vis us library customers too: if our vendors don’t push back, and they usually don’t, if they just implement every “Can’t you just…”, then we end up with monstrous vendor-supplied software too. Which indeed we often have.)

The scenario Gale describes is very familiar to us in libraries. The first part of the solution is just learning how to talk about these things with stakeholders, which is something this blog post (and Gale’s original essay, which I encourage you to read) might help us move towards.

But Gale also proposes some particular strategies and mechanisms to try and resist software complexity. I’m less sure of how well some of these apply to the library organization, but they might:

On the product side, your best tool for eliminating complexity cost is data. Studies show that most product ideas fail to show value, and many even remove value. If you methodically test the impact of each change you make, you can discard those that are negative, but equally relevant to complexity cost, also discard those that are neutral. A change that doesn’t hurt the key performance indicators still hurts the product because it adds complexity debt that must be paid on all future projects.

In the library context, we can’t simply measure ‘value’ as how many things we sell or how much money we bring in. We’re still learning how to use data to measure ‘value’ in the library context; it will probably always be somewhat subjective, since different people have different ideas of what outcomes to prioritize, of what ‘value’ even means. But we all know we’ve got to get better at data-driven decision-making anyway, and I think Gale is right that it’s the best tool in the fight against software complexity. In our context, though, it’s certainly not an immediate magic bullet.

…Often, the best thing for a product is taking something away from it.

Beyond establishing the impact of changes, data also helps you to shed complexity over time. All of your features should be logging their usage, and this usage must be made transparent to everyone. Show your team where to find this data, and give them the means to ask how to interpret it if it’s unclear.

That sounds great. Unfortunately, libraries have been trying to achieve such things for years, with limited success. What has gone wrong with many large libraries’ “analytic data warehouse” or “library dashboard” projects, and what to do instead, could be the topic for another post (but it would start with the word ‘over-engineering’, which is not unrelated to discussions of complexity cost).
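
For what it’s worth, the instrumentation Gale describes doesn’t have to begin as a data warehouse. A minimal sketch (all names and file paths here are hypothetical) could be as simple as an append-only usage log that anyone can tally:

```python
# Minimal sketch of per-feature usage logging; all names and paths here are
# hypothetical. Append one JSON line per feature use, tally the log on demand.
import json
import time
from collections import Counter

def log_feature_use(feature_name: str, log_path: str = "feature_usage.log") -> None:
    with open(log_path, "a") as f:
        f.write(json.dumps({"feature": feature_name, "ts": time.time()}) + "\n")

def usage_counts(log_path: str = "feature_usage.log") -> Counter:
    counts = Counter()
    with open(log_path) as f:
        for line in f:
            counts[json.loads(line)["feature"]] += 1
    return counts

# In application code:
#     log_feature_use("advanced_search")
# Before deciding whether to carry a feature forward through a refactor:
#     print(usage_counts().most_common())
```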

At Yammer, where data is collected on most every feature and is made available to everyone, I’ve repeatedly seen this pattern: A developer is working in an area of the code and discovers that it will take more than a trivial amount of time to carry a feature forward in a refactor or other change. Instead of blindly taking on that work, he or she will run reports against the usage data to find out whether a given feature is often used. That data can then be run past the product managers who can help decide whether it’s sensible to just drop the feature. This can also save time before something even becomes a proposal, because product managers will start by asking how well-used is a feature area they intend to enhance. Many things that seemed like great ideas are revealed to be wastes of time, and complexity is saved.

That does sound nice, doesn’t it? The thing is, no matter what the data says, some stakeholders will still want it. Especially if we’re not just talking about possibly vetoing new features, but actually removing existing ones. You know how much sturm und drang even the suggestion of such a thing causes, right?

Maybe in Gale’s context, if only a tiny minority of users use something, it’ll be more or less self-evident that it’s not something that should be ‘carried forward’; but not so much in our non-profit, value-driven context. And if some stakeholder wants it, the others, in our generally consensus-driven, conflict-averse organizations, will say, “What’s the harm? Can’t you just add it, or keep it there? Okay, so only a tiny minority of people use this feature, but it doesn’t hurt, does it? That minority use might be incredibly valuable to our mission, and it’s important we don’t leave out the minority. How hard could it be to add it in anyway? Can’t you just? What’s the harm?”

The harm is in complexity. I’m not saying we should never provide features or services that are only used by minorities; definitely we should. But we’ve got to recognize that we can’t add everything that anyone ever suggests or wants. Even if we actually can, even if ‘everything’ only takes a few weeks to add, the problem is the complexity costs down the line. We (our organizations’ decision-making processes) have got to learn to balance these complexity costs against true value, however we may measure or account for value.

Complexity cost is the debt you accrue by complicating features or technology in order to solve problems. An application that does twenty things is more difficult to refactor than an application that does one thing, so changes to its code will take longer. Sometimes complexity is a necessary cost, but only organizations that fully internalize the concept can hope to prevent runaway spending in this area.

update 9am ET Tuesday July 9: I’m going to add a quote from a comment on the article cited, because I find it well worth thinking upon in our library organizational contexts:

PMs argue with developers about the worth of a feature to the user vs the technical debt it will incur over time. To an extent, this is inevitable and healthy and naturally emerges from the perspectives of their different areas of expertise.

This gets nasty when the … “organizational debt” … has not been paid. It’s never going to work unless the engineers and the PMs and the marketing people and the execs have a similar understanding of why their product is valuable to their customers — how it is used, who uses it, and why it’s better than the competition. Creating this shared understanding among such different sorts of people is, in many ways, more difficult than building the product. If you get it right (using data, persuasion, and leadership) then the unhealthy friction disappears — worthwhile technical debt becomes a worthy challenge and a heroic contribution. Worthless technical debt is a bad idea and dumb features are excised from the project. And… EVERYBODY AGREES on these decisions because they fit into the shared mental model of the value of the product.

The less work the organization puts into a shared vision, the greater the friction between groups and the worse the final decisions become.

8 thoughts on “The true cost of adding new features”

  1. This is really interesting, Jonathan. We have more than one tool that suffers due to complexity issues caused by a slow accrual of enhancements. The flow for who can request what and how in our catalog has become so complicated that it’s impossible to remember and extremely difficult to flow chart. And yet, we never seem to be able to convince ourselves that there would be benefits to simplifying it. It really is the convincing that is hard….in many things I have stopped gathering / analyzing statistics on an ongoing basis b/c ANY usage at all, even very minimal, means that no one wants to drop a feature. I wish I could find a good way to show (data-wise) that sometimes the “best thing for a product is taking something away from it.”

  2. Thanks Emily.

    I think of you guys at NCSU as being some of the leaders among libraries at data and data-driven decision-making… so it’s kind of dismaying to hear about your experience there with areas where it’s not worthwhile to bother with the data, because it won’t affect decision-making in any reasonable way. But I am not surprised; that is my experience too.

    I wonder if it’s not really that any usage at all means that “no one wants to drop a feature” — it’s that any usage at all means that there’s _someone_ who doesn’t want to drop the feature; and if there’s anyone at all that doesn’t want to drop a feature, then in our consensus-driven conflict-averse organizational culture, that means others will go along with keeping it to keep that someone happy (and to keep the 5 people that use or want the feature happy): “What does it hurt?”

    The only solution is probably active leadership in our organizations, that understands the cost of complexity, that we can’t be all things to all people (at least not and do any of it well), and that somehow imparts this to the organization and its behavior. I have absolutely no idea how we get there. I worry that, sadly, we probably won’t, and the future of libraries is not bright.

  3. Here’s one comment from the article I cited, that’s worth reading. http://firstround.com/article/The-one-cost-engineers-and-product-managers-dont-consider#comment-937226431

    The poster argues that leadership and shared vision on organizational goals is what makes it possible to make decisions about what complexity is justified and what isn’t. This makes sense, and most of our library organizations are probably short on at least one if not both of those things.

    The less work the organization puts into a shared vision, the greater the friction between groups and the worse the final decisions become.

  4. Excellent post Jonathan.

    At my institution, we tend to manage software by committee. With the committee approach, we add a 5-15-headed PM to the shared-vision equation, which, given the already problematic nature of determining a library product’s value, makes it very difficult to remove features/complexity (e.g. A-Z lists!?).

    I’d like to see us move to a single product manager model, where that individual can work with stakeholders, other PMs and developers to make user/data driven decisions about our software. I can dream, right?

  5. Thanks Scot; I think your experience is common, and I think I agree with you that having a single empowered product manager would be a step in the right direction.

  6. Really interesting post. I first started to appreciate this problem fairly recently, with my ongoing work on the ETD project. Your post fleshes it out in my mind in a really convincing way. So the next time we’re both in a meeting and you feel like you’re getting “can’t you just” requests that will cause undue complexity, make this argument. I’ll at least understand it, and I may even agree with you, depending on the circumstances.

    There’s another factor that I’ve seen in libraries that I think contributes to this: project/product owners (for lack of a better term) who don’t have a lot of guidance or training on defining requirements. By this, I’m referring to the people outside of Systems, generally librarians, who are often the drivers behind projects, whether it’s because of their subject expertise, their job functions, their positions in the organization or whatever. The first time I was asked to come up with functional requirements for something, I was completely panicked and felt unprepared. And this was coming from someone who actually studied a bit of HCI at an I-School. I can only imagine what other librarians feel when asked to do this.

    I think we could get a lot of benefit out of teaching more people in the library (myself included!) how to systematically and logically define project/product requirements, and then how to document and communicate them in a standardized format. I think we’d see stronger projects, more appreciation of the development cycle and Systems’ work, and maybe (just maybe) fewer “can’t you just” feature requests piling on as projects progress.

  7. Thanks Christie. Personally, I definitely don’t think defining all the ‘requirements’ in advance, or even requirements definition in general, is the answer. I think iterative ‘agile’ development, in partnership between stakeholders and developers, is a better way to go. It’s just that everyone involved needs a focus on what’s really needed to succeed, and not on everything that seems like it might be nice. But you can’t necessarily know exactly what that is in advance, before you start and see a first version. Trying to develop all the requirements up front is, in my opinion and experience, in fact counter-productive to developing a razor-sharp focus on what’s really needed to succeed.
