Are you looking for the Holy Grail of DevOps, that mysterious purple unicorn? Are you trying to reduce batch size and get closer to single piece flow? Learn how measuring DevOps performance will help you achieve these goals and check out our on-demand webinar, Measure DevOps Performance with VersionOne.
Single Piece Flow, the Holy Grail of DevOps Performance Measurement
We’re all seeking the Holy Grail of DevOps, and that certainly involves efficiency and automation, but to truly find the Holy Grail you must measure your progress along the DevOps continuum.
Some people refer to those companies that have found the Holy Grail as purple unicorns. If you’ve read Jez Humble and David Farley’s book on Continuous Delivery, and if you look at what companies like Netflix, Amazon, LinkedIn, and Facebook are doing, the thing they all have in common is single piece flow.
By single piece flow I mean that the maximum work in progress (WIP) is no higher than one across every single phase of the value stream, from the minute work begins all the way to value consumption. Most organizations do not have single piece flow throughout their entire value stream.
In fact, the vast majority of organizations have much higher work in progress levels, especially as you move toward delivery. In the late-stage, pre-production phases of the value stream, WIP levels tend to climb considerably.
Batch Size & Single Piece Flow
This brings us to batch size. Batch size is the average number of backlog items at any stage of value delivery. You’ll notice I didn’t use the word deployment; I focus on value delivery instead. We often assume that once we deploy from one environment to the next, or into production, we’ve delivered value, when in fact we may just be moving from one phase to another or hiding the change behind feature toggles.
When I talk about delivery, I’m really focusing on value delivered all the way to the point where it’s being consumed by end users. I’m not suggesting that you focus on batch size as a single metric or as the most important metric, because that could lead to undesirable outcomes, but we do think it’s a really good indicator of overall agility and progress toward DevOps transformation.
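To make the batch-size idea concrete, here is a minimal sketch of how you might snapshot WIP per phase of a value stream. All phase names and work items below are hypothetical, and the computation is a simple count, not any particular tool’s implementation:

```python
from collections import Counter

# Hypothetical snapshot of backlog items and the value-stream phase
# each one currently occupies (phase names are illustrative).
items = [
    ("STORY-101", "develop"),
    ("STORY-102", "develop"),
    ("STORY-103", "test"),
    ("STORY-104", "test"),
    ("STORY-105", "test"),
    ("STORY-106", "staging"),
    ("STORY-107", "staging"),
    ("STORY-108", "staging"),
    ("STORY-109", "staging"),
]

# WIP per phase: how many items are "in flight" at each stage.
wip_by_phase = Counter(phase for _, phase in items)

for phase in ("develop", "test", "staging"):
    count = wip_by_phase[phase]
    flag = "" if count <= 1 else "  <- batch building up"
    print(f"{phase:8s} WIP = {count}{flag}")

# Single piece flow would mean every phase reports WIP = 1.
```

Notice how WIP grows toward the late, pre-production phases; that growth pattern is exactly what the article describes in most organizations.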
How to Reduce Batch Size
The longer your release cycle, the larger your batch size, and that creates some challenges. The larger your batch size, the more important performance measurement and data-driven DevOps become.
The best way to get to smaller batch sizes is performance measurement. There are a couple of ways that we’ve seen organizations successfully reduce batch size.
The first is to take their legacy applications and just start over with a greenfield application. That may not be practical in a lot of cases.
Decomposing Legacy Applications
There’s another pattern that’s fairly common: decomposing legacy applications one piece at a time into microservices. That might be an option some of you are already pursuing; it is continuous, incremental improvement. Our suggestion is that with better performance measurement and better metrics about how your value streams work, you can really accelerate your transformation from lower maturity to higher maturity via smaller batch size.
Forrester found that 94% of people expect to see significant improvements through better value stream management, and 84% are looking for end-to-end traceability from business initiatives and source code components through test assets, all the way to deployed components. This shows that many organizations are keenly interested in better measurement of the DevOps machine itself.
What is DevOps Performance Measurement?
Here are a few examples of performance measurement metrics that when leveraged can help organizations dramatically reduce batch size and accelerate the journey to single piece flow.
- WIP Across The Value Stream
- Efficiency Across The Value Stream
- WIP By Phase
- Efficiency By Phase
- Commit Distribution
- Value Profile
- Rogue Commits
- Fragile Code
I like to break DevOps performance measurement into two groups. The first is measuring how value flows through your value stream. You do that using work in progress across each phase, and by thinking about efficiency at each phase of the value stream.
The second group of performance measurements helps us better understand risk. I recently had a conversation with a CIO who pointed out that he could deploy value whenever he wanted; deploying value isn’t his challenge. His challenge is deploying with confidence. It’s one thing to speed up flow; the counterbalance is being able to speed up flow without increasing risk, and hopefully while reducing it. Putting yourself in a position where you can deploy with confidence is the other part of the equation.
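The per-phase efficiency mentioned above is commonly computed as flow efficiency: active working time divided by total elapsed time in a phase. Here is a sketch with made-up timestamps; "flow efficiency" here is the standard lean metric, not any vendor-specific calculation:

```python
from datetime import datetime

# Hypothetical timing data for one work item:
# (phase, date entered, date active work started, date exited).
# Time between entry and active start is queue/wait time.
phases = [
    ("develop", "2017-03-01", "2017-03-02", "2017-03-06"),
    ("test",    "2017-03-06", "2017-03-10", "2017-03-12"),
    ("staging", "2017-03-12", "2017-03-20", "2017-03-21"),
]

def days(a, b):
    """Whole days between two ISO dates."""
    fmt = "%Y-%m-%d"
    return (datetime.strptime(b, fmt) - datetime.strptime(a, fmt)).days

total = active = 0
for name, entered, started, exited in phases:
    phase_total = days(entered, exited)
    phase_active = days(started, exited)
    total += phase_total
    active += phase_active
    print(f"{name:8s} efficiency = {phase_active / phase_total:.0%}")

# Overall flow efficiency: active time vs. total elapsed time.
print(f"overall  efficiency = {active / total:.0%}")
```

In this made-up example the staging phase has the worst efficiency because the item sat queued for days before anyone touched it, which is typical of the late-stage WIP pileup described earlier.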
There are several benefits to using performance measurements. The first is visibility. I discussed the challenge of tracking value as backlog items are converted into source code and then binary artifacts. Visibility of value as it flows through every phase of the value stream is really important for decreasing risk.
Achieving faster time to value through shorter cycles is also important, as is the concept of failing fast: if something turns out not to be valuable, let’s find out quickly. Performance measurements also help answer the question of what we should automate next.
This is just a small excerpt from our webinar, Measure DevOps Performance with VersionOne. Watch the full webinar to learn how we’re working at VersionOne to make business value more central to measuring DevOps performance and the importance of batch size as a measure of overall DevOps maturity. Stay tuned for part three of this blog, Measure DevOps Performance with VersionOne, where we will dive into how you can use VersionOne for DevOps performance measurement.
VersionOne is a registered trademark of VersionOne Inc. and Continuum is a trademark of VersionOne Inc.