Embracing the Journey
Last time, I made the case that software organizations need to be careful that they don’t get too emotionally attached to the goals they set for themselves — both big and small. If you find that argument compelling, then you might ask, “What are some concrete steps my organization can take to be more journey-oriented?” We certainly don’t have it all figured out, but here are some applications we have put in place to encourage journey thinking.
When building and releasing software, if you keep your destination close — just a few days down the road — it’s much easier to know enough to accurately understand and imagine it. But how do you come up with a destination that is so close? It’s tough to imagine small meaningful chunks of software that can be released. Early in my tenure at my current gig, I kept telling people: deliver a small, INVESTable chunk (Independent, Negotiable, Valuable, Estimable, Small, Testable) of what we imagine the complete solution to be — not the entire thing. Unfortunately, I’m terrible at imagining what those smaller chunks might be. Thankfully, people much more talented than I am were able to come up with usable, iterative bits of the entire solution; and over time, everyone has gotten much better at doing so. Diving in on how to do that is beyond our scope here, but if you’re curious to dig further, we found this heuristic approach to be helpful.
Operating this way leaves plenty of room to learn and pivot along the journey. Any missteps are small failures — not big ones.
This will be a battle with destination thinkers. It will feel like a waste of time, and indeed it would be if we actually knew what the destination was (which I would claim we don’t). As the Agile gurus will tell us, no one ever said Agile was more efficient; it’s just more effective at producing the right thing.
Even if you’re willing to give up on the idea that you can know what your solution should be, engineers are still tempted to say that the architecture has to be established. That leads to a lot of upfront design and anticipation when, of course, you can’t know everything that needs attention, and you often spend time on things that turn out not to matter. As Bob Newhart would say, “Stop it!” Build the architecture necessary for the iterative chunk you are delivering, nothing more. Down the road, when you are delivering a bit of functionality that makes you realize your original architecture was off, refactor. Sure, it’s a little more effort then, but you’re only incurring the cost when you need it rather than incurring all the costs for the things you never need.
A former manager of mine (drawing on his experience playing chess, I believe) used to call this “accumulating improvements”. Just always make your position better in the code. In the old days before automated tests and continuous deployment, this wasn’t workable because refactoring incurred so much additional testing. Now that’s just not the case, but unfortunately, many people behave as if it were.
There’s an entertaining story that illustrates this point. I’ll quote political writer Kevin Williamson’s version of it:
One of my favorite political fables concerns Dwight D. Eisenhower and his tenure as president of Columbia University. The campus was undergoing an expansion, and Ike was presented with two very different plans for laying out new sidewalks. The architects were irreconcilable, each insisting that his plan was the only way to go and that the other guy had it all wrong. [Sound familiar?] Ike, sensible fellow that he was, had grass planted instead, telling the architects to wait a year and see where the students trod paths in the turf, and then to put the sidewalks there.
In other words, see what wins in the marketplace of architectures (or standards or coding patterns or whatever) and then formalize it. Abstract debating does very little to prove a point. Seeing the code or pattern in action and using a few alternatives is much more effective at discovering what’s best.
Make It Safe To Fail
Learning along the journey means that you’re making some wrong decisions (i.e., failing). Therefore, organizationally, that has to be acceptable — celebrated, even. We have a variety of activities to make space for failure, which I have covered in prior articles: from our “Fail Whale” ceremony to our focus on measuring the Mean Time To Resolution. I would encourage you to read those articles if you haven’t already.
Stop Writing Specifications
I think most software organizations have evolved past writing formal specifications, but many still do them in some other disguised format. Maybe if we call it an “epic”, no one will notice that it’s actually a specification.
My company has actually changed our language in this space. The cards on our top-level Agile board are simply called “Problems”. The best ones are just a sentence or two describing what the problem is. We avoid writing out an elaborate plan for how the problem should be solved. We leave that to the team doing the work to discover the best solution.
Once again, this is a difficult shift. Many Product Managers see this as a threat to their existence. It’s far from that. Rather than being list makers and note takers, they can worry about things like establishing good customer relationships and building industry awareness and recognition. It’s similar to the transformation for testers in a world of test automation. They don’t have to do the boring stuff anymore — test checklists. They can focus on the fun stuff — exploration.
Our Agile Coach constantly quotes Marty Cagan to us, “Fall in love with the problem, not with the solution.” That way when you discover that your solution was off, you don’t feel bad dumping it. You were never in love with it anyway.
The other anti-pattern that often emerges with specifications is that we suddenly all turn into lawyers. We write up a big contract, try to spot all the loopholes and alternatives, and then hold one another to it. When the person who designed the solution discovers that it was actually not the greatest way to solve the problem, he starts blaming the engineers. Then the engineers point to the contract and say, “We did exactly what you told us.” What a terrible way to live! How about if we all just try to solve the problem together and embrace the fact that it might take a few iterations?
Stop Trying To Anticipate Everything
In my last article, I made the point that engineers are valued for their ability to anticipate. That results in a cultural current that we have to swim against: the idea that we have to build software that anticipates all possible (no matter how improbable) eventualities. What if we took a different tack? What if we considered only solving for the scenarios that will actually occur?
A simple scenario: a developer needs to implement an API that returns a list of items that vary in number. Everyone knows that you should make that list pageable because otherwise the payload might be too large. But will it be? Maybe it won’t. Why put in the extra effort (assuming it is extra effort) unless you know it’s necessary? If down the road you discover that it does need the ability to page, you’ll know a lot more about what’s going on and have real-world scenarios to make sure you’re doing it right.
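To make the idea concrete, here’s a minimal sketch in Python (all names are illustrative, not from any real codebase): the first iteration just returns the whole list, and paging is layered on later only if real usage proves the payloads are actually too large.

```python
def list_items(store):
    """First iteration: no paging. Fine while payloads stay small."""
    return list(store)


def list_items_paged(store, offset=0, limit=100):
    """Second iteration, added only once the need is demonstrated.

    Existing callers of list_items() are undisturbed; new callers can
    walk the list a page at a time using next_offset.
    """
    items = list(store)
    end = offset + limit
    return {
        "items": items[offset:end],
        "total": len(items),
        "next_offset": end if end < len(items) else None,
    }
```

The point of the sketch is the sequencing: the paged version is written later, informed by real traffic, rather than speculatively on day one.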
This is a good opportunity to invoke another president — the wildly underrated Calvin Coolidge: “If you see ten troubles coming down the road, you can be sure that nine will run into the ditch before they reach you.” Wise words. Stop building software as if all ten troubles will occur. If one of those troubles does show up later and you’re doing things right, you should be able to easily and quickly resolve it.
Stop Treating Code As An Academic Exercise
On a related topic, I have very little patience for the intellectual chess match that often goes on during code reviews. There’s this game we play where we try to call out some corner case that wasn’t accounted for. If there is a non-zero chance it will occur, checkmate! Stop. The point isn’t to prove you’re smarter. The point is to build a solution to a customer’s problem. Returning to my earlier example, if we only return 1000 records when there are 1001, is it really the end of the world? Not necessarily. It all depends on the scenario, but don’t fall into the trap of assuming it’s a problem.
As with other items in this list of applications, this can be a tough cultural challenge. As I mentioned in a prior article, in my organization we invoke some useful language when nitpicking like this happens: “Are you just sharing information?” If nothing else, it should cause us to pause and question: does that scenario really matter in this case?
Don’t Set Deadlines
Set aside the fact that deadlines are bad for other reasons. The only way to have any chance of setting a timeline is if you completely understand the scope. Without that (since we can’t know the destination), you can’t possibly set a reasonable timeline. Even if you knew the scope, the Planning Fallacy makes it clear that we’re very bad at setting timelines, so let’s stick with what we can be successful at: continuous, iterative delivery.
Treat Everything As An Experiment
Forget about software for a moment. What about everything else we do at work? All of our process ideas should be treated as experiments for all the same reasons we’ve discussed here. We had the wacky idea a number of months back that we would have a zero bug backlog. Either bugs get fixed right when we find them, or we decide we simply aren’t going to fix them. Ever. (Rather than fooling people into thinking that magically one day we will make time to go fix their pet bugs.) We approached that as an experiment: “Let’s try it and see what happens.” That experiment has turned out successfully — so much so that we have attached a few other items (security vulnerabilities and code coverage deficits) to the process by treating them the same as bugs.
In another experiment, we needed to solve a problem where our Data Operations team occasionally needed to be able to build product. We decided to experiment with a “developer loaner” program. Our first iteration of that didn’t go quite as planned, but we learned a lot in the process and will adapt how we approach it next time.
Let Your Customers Steer
Your greatest ally in discovering the right destination is the person whose problem you’re trying to solve — your customer. It’s an Agile trope, but it’s so very true. You might come up with the coolest idea in the world, but if your customer won’t use it, you’ve wasted your time. They’re the ultimate litmus test. Now, this doesn’t just mean asking your customers what they want. After all, as Henry Ford said, “If I had asked people what they wanted, they would have said faster horses.” There has to be room for innovative ideas, but make sure you’re ultimately solving your customers’ problems (not some problem you imagine they have) and give them an opportunity to help steer.
To be clear, this doesn’t just mean you should have a conversation with them. Get new code in front of them as soon and as frequently as possible.
The one aspirational metric we track in our organization is Delivery Lead Time. (See Accelerate for more info on the topic.) That’s basically the time between when a team picks up a “Problem” and when a customer sees the first bit of product solving it. We aspire for that to be as short as possible because that means we’re getting feedback quicker, thereby keeping our failures smaller. We average around 2.6 weeks, but we’ve done it in as little as 3 days!
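As a rough illustration (the function names and data shape here are assumptions, not our actual tooling), the metric is just arithmetic over two timestamps per Problem: when the team picked it up, and when the customer first saw working product.

```python
from datetime import datetime


def lead_time_days(picked_up: datetime, first_delivery: datetime) -> float:
    """Delivery Lead Time for one Problem, in days."""
    return (first_delivery - picked_up).total_seconds() / 86400


def average_lead_time_days(problems) -> float:
    """Average over an iterable of (picked_up, first_delivery) pairs."""
    times = [lead_time_days(p, d) for p, d in problems]
    return sum(times) / len(times)
```

Tracking the average (and watching the outliers) is what makes “keep the destination close” measurable rather than aspirational.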
Hire The Right People
I suppose this goes without saying, but if you’re going to operate this way, you have to have people that embrace it. Weeding out the wrong kinds of folks during the interview process is critical. I’m increasingly of a mind that destination thinking is a fatal flaw. We have a few indicators we look for when interviewing.
One of our favorite questions is, “If you could deliver a project on time but imperfect, or late but perfect, which would you choose?” If it isn’t obvious yet, the right answer is imperfect and on time. Perfect implies you know the destination, and besides, software isn’t ever perfect. I recently had one candidate respond with a great answer to this question. He quoted Timur the Lame: “It is better to be on hand with ten men than absent with ten thousand.”
I also speculate that people who love playing chess and agonizing for 30 minutes over a single move are probably not the kind of people who would thrive in an environment like I’m describing. I have seen this hold true in multiple cases. I started folding that question into our interview process: “Do you like to play chess?” Once, a particularly promising candidate answered that indeed he did like playing chess. I was genuinely worried that he might be frustrated by our culture, but he quickly followed up with, “I play speed chess.” I had never thought about it before, but that was perfect. He didn’t agonize over getting one particular move just right. Because of his answer, we now have a new maxim: we play speed chess — not slow chess.
A word of warning: don’t treat any interview question as a litmus test. There are just too many variables. It’s absolutely possible that someone would answer these questions “incorrectly”, but still be a great fit. As always, it takes a lot of digging to really understand; but that’s probably a topic for another post. Sometimes, the most effective thing you can do is explain the culture (e.g., “we play speed chess around here”) and allow the prospective employee to make a decision accordingly.
On the flip side, if there are already people in the organization that don’t think this way, the practices enumerated here will make them fairly miserable. They will likely find new work of their own accord.
Don’t Skimp on Public Relations
Finally, don’t forget that your software team doesn’t exist in a silo. It rubs up against other parts of the organization. Minimizing friction at those interface points is a high priority for a software leader. I have had more than one conversation around a process that crosses organizational lines where people want to know the complete definition of the process up front. It takes a bit of work to convince them to come along for the ride and “discover” what the process should be — or to embrace the idea that the proposed process is just a first iteration until we discover what adjustments need to be made.
Alright, with all those applications behind us, this is a good time to revisit our organizational goals. Remember, my original claim is that all the principles I outline in this series of posts should drive toward at least one of the stated goals. I would claim that “journey thinking” feeds the following organizational goals:
- Antifragile: Operating in this way keeps failures small and allows the system to adapt when failures do occur.
- Nimble: Similarly, an organization can quickly discover and pivot to meet newly discovered needs. Nothing makes an organization less nimble than saying that we can’t move on to a different, better goal because we’re busy finishing out our current goal.
- Productive: Stuff gets built rather than talked about, analyzed, and designed; and you can be confident that it’s the right stuff, not just what someone imagined to be right.
- Innovative: It’s easier to innovate on the fly in service of solving a problem. There’s no single innovator deciding the solutions. Everyone is involved in solution and framework design.
Once again, pretty good bang for the buck: four out of five. Are there any practices that your organization has that foster journey thinking? Leave your comments below.
Curious to Learn More?
Check out Principles for Leading Software Teams: A Guide for related articles and reference materials.