By Kevin M. Smith
Have you heard the news? "60% of all software development projects fail to meet their goals."
Of course you've heard this. Everyone has heard this nugget of wisdom. It starts off presentations, it's used in consulting pitches, software integrators put it in their marketing materials, and IT departments promise it won't happen to them (or you). Here's the problem: it's probably wrong. I believe that, in fact, closer to 80% of enterprise software development projects fail to meet their goals. The key is that a specific type of software project nearly always fails: the "old school" waterfall project. The kind that starts with requirements crafted in excruciating detail, progresses through multiple layers of sign-off, is developed in several phases, each with its own system, unit, and user acceptance testing, and eventually finishes with a result that no longer fits the needs of a business that has long since moved on. Over the years I've seen software released that no longer fit an evolved business model, software that missed huge, key requirements, and software that was released just in time for an acquisition that changed the entire business environment.
Whew. I'm frustrated just thinking about it. Luckily, the software development industry (mostly) figured out that this was a problem quite a while ago. Most successful projects today, especially externally facing consumer projects, follow a very different trajectory than the development projects of ten or even five years ago, emphasising tighter contact with the customer, faster development cycles, and the testing of smaller chunks of code.
So what does this have to do with business process design?
Unfortunately, many business process redesign efforts make those old-school enterprise software projects look like Olympic champions by comparison. Unlike their software development counterparts, most practitioners of "process redesign" have not been eager to bring their methods into the 21st century. In fact, while software design is light-years beyond where it was in the early 1990s, process design, for the most part, has changed very little. The practices learned many years ago are largely still followed.
And, surprise: like the outmoded techniques for software design, process design projects conducted in this manner also have an extremely high "failed to achieve results" rate, even worse in my experience than that of IT projects. I speak from experience; this is exactly the way we used to perform process redesign work. Redesigning a process using this "technique" was tedious and frustrating, both for us and for our clients, and it was tough to achieve the desired result.
But it doesn't have to be this way.
Process redesign projects don't have to be lumbering, slow, painful exercises that rarely succeed in achieving their goals. By learning the hard-won lessons of software developers, you can dramatically increase your chances for success in your process redesign project.
When software development moved past traditional waterfall-style development, a new way of thinking emerged called "Agile Development." Agile development stresses speed over perfection, rapid development of small bits of functionality, and testing of all deployed code. How can this be used for business process improvement? Here are three of the main "agile" concepts and how you can use them to improve processes more rapidly and with a much higher success rate:
One of my favourite phrases is "the perfect is the enemy of the good," and nowhere is this more true than in the design of business processes. In the past, businesses undergoing process redesign, whether they called it TQM, BPR, or Six Sigma, all made a similar mistake: they took far too long to develop the process, hoping for a "perfect" final design that met all objectives and avoided all constraints. As someone who has fallen prey to this seductive path myself, I can tell you with certainty that there is no "perfect process" waiting around the corner, no "magic bullet," no single "correct solution." The process that is actually deployed and actually in use is almost always better than the "perfect" process that exists only on a Visio diagram hanging on the wall. Business needs and goals change so quickly these days that you simply cannot afford to spend months designing the ultimate business process. By taking an extended period of time to develop our business processes, we risk a final product that was "perfect" for the situation that existed several months ago but useless in today's environment.
So how do we reconcile the need to improve processes with the need to move quickly and get something that improves the situation up and running? One solution is called the "minimum viable process" or MVP. The concept is simple: design the simplest, most basic process that will get the job done and iterate from there. OK, so what does that mean? It means that you dispose of just about everything that isn't directly related to delivering the output of the process, until you can prove that without the pieces that are left, the process simply cannot function. It means that you design the process without the multiple re-work, validation, approval, and wait-state loops that dominate most processes today. Treat each checkpoint or approval step as a design failure, a step that exists only because the process is inherently flawed in needing a checkpoint at all, and try to design that step away. Obviously you won't be able to eliminate every single check-and-balance step in your process, but minimise them and see what happens. The key with the MVP design is to get a new process out, up, and running as quickly as possible to test its performance in the real world. Those super-complex, "perfect" processes will need to reach the real-world stage at some point; wouldn't you rather have spent two-thirds less time in process design when you find out that your process has major flaws that must be corrected? Use the MVP as your initial test platform to challenge your assumptions and ideas about the new way of doing work. Then use the next concept, continuous deployment, to make the process better fit the goals of the business.
Any process that you design, whether you spend days, weeks, or months building it, will have problems. You can count on it. I've designed and implemented many new processes over the past 15 or so years, and I have yet to see a single process that, once "in the wild," didn't have to change to some degree. Given this, the key to a successful implementation is the pace at which you can effectively change the process design in response to the issues you identify. Too often, organisations take an "implement once and forget it" approach, and unfortunately this results in a poor overall redesign result (part of that 60%). You have to find the process flaws and fix them quickly.
So how do you remedy this situation, recognise issues with processes, and make changes that better meet the design goals? The best practice is called "continuous deployment," and it has grown in popularity in the software development community over the past few years. Here's how it works in the software world: each small change a developer makes is committed, run through automated tests, and, if it passes, deployed straight to production.
This all happens very quickly. In fact, one of the leading advocates of continuous deployment, Eric Ries, talks about how his company would deploy commercial software to the customer base multiple times per day. He stated that if each engineer didn't deploy at least every few days, it meant that something was wrong. You can make the same continuous development & deployment principle work for you when redesigning business processes. Adopt the philosophy that every day during the design cycle, something, anything, must be "shipped." It could be a new form for ordering, a prototype of an online database for tracking customer data, or a change to your CRM tool. The key is that you release constantly and learn from what happens. Think small frequent changes, not big delayed changes.
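The ship-small-and-learn loop described above can be sketched in a few lines. This is a toy illustration under assumed conditions, not a real deployment system: the change names, the metric, and the keep-or-rollback rule are all made up for the example.

```python
# A toy sketch of continuous deployment applied to process changes,
# assuming "measure" returns a single success metric (e.g. the fraction
# of orders completed without rework) observed after each release.

def deploy_continuously(changes, measure, baseline):
    """Ship one small change at a time; keep it only if the measured
    metric does not regress, otherwise roll it back immediately."""
    current = baseline
    kept, rolled_back = [], []
    for change in changes:
        result = measure(change)      # observe the change in the wild
        if result >= current:         # helped, or at least did no harm
            current = result
            kept.append(change)
        else:                         # regression: undo it right away
            rolled_back.append(change)
    return kept, rolled_back, current

# Illustrative data: three small process "releases" and the metric
# observed after each one (all numbers are invented).
observed = {"new order form": 0.82, "CRM field change": 0.85,
            "extra approval step": 0.80}
kept, undone, final = deploy_continuously(list(observed), observed.get, 0.80)
```

The point of the sketch is the shape of the loop: because each release is small, a regression is immediately attributable to the one change that caused it, and the rollback is equally small.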
By now you might be thinking, "Wait - we can't do that. What if we get it wrong? We need to perform testing/cost-benefit analysis/executive review/financial review/legal approval/(insert committee here) review before we do anything. We could hurt the business." I don't believe that for a second. The chance that a small "release" of a business process change, one that you monitor very closely to observe the results, will damage the business irreparably before you see the problem and release a fix is very low. In fact, I would argue that these small process releases are much easier to monitor, and problems are far easier to detect, than when you perform one massive release at the end of a process redesign. World-leading design firm IDEO calls the concept of converting risk into smaller, manageable pieces "risk chunking" and uses it to ensure that its new product designs aren't an "all or nothing" proposition. You want to see risky? Forklift in a massive process implementation after eight or ten weeks of design work and try to identify the issues (or benefits) associated with what you just did. Now that's risky!
Of course, if you release a new process or a process change and then ignore it and move on to the next challenge, you've missed the point. When performing continuous deployment of process, you must monitor the results. Did it work? Did it cause unintended consequences? The way to tell is through another software technique called "A/B testing."
You've created the smallest, leanest process possible and implemented it using continuous techniques; now what? Now you need to test the results. Too often, process implementations are treated almost like a bullet to the head: one shot and it's over. The software world has taught us nothing if not the need for constant review of the effectiveness of each "release." Imagine software that was released, had bugs, and was never reviewed or fixed. How likely would you be to call that software a success, or to recommend it to a friend or colleague? In the software world of agile development, a technique called "A/B testing" or "split testing" is used to determine the implications of a recent release.
Here's how A/B testing works: because you are doing continuous, small deployments, each piece of functionality is relatively easy to understand in terms of its implications for users. When you deploy a small functionality change (the "A" functionality), you deploy it to a subset of the users and compare against the users who are still using the old functionality (the "B" functionality). Think of it as a small, rapid beta test. This can have huge, beneficial implications for software: think of what would happen if you deployed a new "Buy Now" button to a website but accidentally coloured the button the same as the page background. You now have, as Eric Ries says, "a hobby, not a business model." Obviously, you would prefer to detect an issue such as this sooner rather than later.
Use the A/B testing concept for your business process changes. Instead of deploying a changed form, website, or process to the entire set of "users," deploy to a smaller set of test users and compare the differences. Did the new process perform the way you expected? If so, deploy the change to the rest of the process users. If not, go back, re-develop that part of the process, and re-deploy. Continuous deployment and A/B testing form a tightly linked loop of design, development, deployment, testing, and re-development. Just remember: A/B testing without continuous deployment means that mistakes will be out in the wild much longer than they should be, and continuous deployment without A/B testing means that issues may go unnoticed for far too long.
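The comparison at the heart of an A/B test can also be sketched concretely. A minimal sketch in Python, assuming you count, for each group, how many process instances completed successfully; it uses a standard two-proportion z-test, and the counts and the 95% threshold are illustrative, not from the article:

```python
import math

def ab_test(successes_a, total_a, successes_b, total_b, z_threshold=1.96):
    """Two-proportion z-test: is group A's success rate significantly
    different from group B's at roughly 95% confidence?"""
    p_a = successes_a / total_a
    p_b = successes_b / total_b
    # Pooled proportion under the null hypothesis of no difference.
    pooled = (successes_a + successes_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    return p_a, p_b, abs(z) >= z_threshold

# Hypothetical example: 200 orders ran through the new process (A)
# and 200 through the old one (B); count those completed without rework.
rate_a, rate_b, significant = ab_test(180, 200, 150, 200)
```

With these invented numbers the new process completes 90% of orders cleanly versus 75% for the old one, a gap large enough to act on; with smaller samples or a smaller gap, the test would tell you to keep both variants running longer before deciding.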
We need to break out of the old cycle of developing monolithic processes only to have them fail to produce the results we anticipated. In an environment where every dollar counts more than ever, we simply cannot afford a 60%-plus failure rate in process redesign. It costs us not only time and money but also credibility with employees. Use the lessons from software development: build lean, minimum viable processes, deploy them quickly and continuously, and test the results against the old process. Not everything you implement will be a success, but when a mistake does occur, you will find it quickly and be able to make the changes necessary to succeed the next time around.
Kevin M. Smith is a co-founder and managing partner at NextWave. Prior to joining NextWave, Kevin held the position of Vice President at Retreon Inc. where he was responsible for development of process and programme management technology and delivery of process management and improvement professional services.
As a Senior Director at Qwest Communications, he led both the Systems Strategy and the Engineering Programme Management teams for the National Networks organisation. He spearheaded the deployment of new technology programmes and developed innovative web tools used by the corporation to manage as many as 15,000 concurrent projects for more than 6,000 users. A Six Sigma Black Belt, Kevin also led corporate process management and improvement initiatives at LCI International and MCI, and served with Booz Allen Hamilton as a consultant in its Process Improvement practice.
Kevin holds a B.S. in Finance, as well as an M.B.A. in Process Management, both from the University of Maryland, College Park.