Case Study: Developing KPIs to Drive Cultural Change
- Adam Witthauer

- Oct 15

I became a manager during a time of significant change. The role I was hired into existed because of a combination of increased scope and rising customer expectations, and we had been resourced to grow our staffing substantially. My department was created by splitting one department into two, and my first order of business was to double my staffing.
In addition to increasing expectations and increasing scope, our entire product portfolio's definition was also undergoing a massive overhaul. This overhaul would require an incredible amount of work in the short term, with the tradeoff being much greater efficiency in the long run. My team was responsible for quality engineering, so for us this meant incorporating an unprecedented number of changes into our inspection plans, purchase order quality contract requirements, laboratory and precision measurement plans, etc.
Signs of Trouble
Every function that touched the product we supported was spread thin. Our suppliers were being pushed so hard to meet the increased scope that we discovered those obscure systemic issues that you really only find when things are pushed to the limit. All of our engineers were already spread thin managing the huge changes coming to our product definition, and even our buyers were spread thin dealing with how these changes affected open POs at a supplier who was already struggling to meet delivery dates. While this change was painful, we were all passionately driven by the idea that in the end we'd have a much better definition set, and life would get better.
Our inspection team found themselves at the focal point of all of these challenges. In addition to the increased scope and constantly changing inspection plans, they were hugely impacted by cost of quality issues. Nonconformances drove cascading increases in the amount of inspection required to address the newfound systemic issues. We were a data-driven organization, and we counted on our inspection team to get us this data.
All eyes were on our inspection teams, and any barriers to their ability to get their job done would not be tolerated. Unfortunately, it was my team that was the greatest offender. We had a backlog of orders that inspectors couldn't begin work on until inspection plans had been updated. The first response, naturally, was to focus on this static backlog and allocate resources within the department to drive it down. We had a great team, and we had a cooperative culture of helping across groups within the department when necessary.
Deeper trouble
That bought us some relief, but new orders kept showing up on the backlog. We had a problem that was going to take more than increased headcount and load balancing to fix.
One reason plans weren't up to date on time was that they often had external dependencies that took time to deliver. One trend the data showed me was that we weren't beginning updates to the inspection plans until days before we expected the order to be submitted. Not only did this cause issues when a plan had external dependencies, but our supplier's increased tempo also made their deliveries less predictable. Our supplier was adjusting deliveries to maximize the efficiency of their operations, and a side effect was that orders sometimes showed up a couple weeks earlier than expected, before my team had even started working on the plan updates.
Getting to the Root
I went to my team to understand why we were waiting until the last minute to incorporate changes. Their reasoning was logical: the widespread overhaul of our product definition drove so many plan updates that they could minimize the number of inspection plan revisions by doing one large revision right before parts were submitted.
This strategy made sense in that it reduced the administrative overhead associated with creating a revision, and also in that by reducing the total number of revisions that existed, it meant fewer plan revisions would have to be reconciled as parts moved through inspection.
However this strategy was counter to what the rest of our greater organization considered as best practice, which was to incorporate all changes as soon as practical. Doing this maximized the time available to deal with any 2nd or 3rd order effects that could arise from incorporating a change in inspection, not to mention the issues driven by external dependencies that we were already experiencing. While it would be possible to expedite the work done by these external partners, the extra cost and effort associated with this would be entirely unnecessary if changes were simply incorporated sooner.
Delaying incorporation meant that instead of incorporating a change across the board, it would be incorporated on one part on one date, and then on another part weeks later. This introduced the potential that changes might not be incorporated consistently across all parts.
My engineers were smart enough to understand this risk and helped mitigate it by briefly reviewing every change order to identify changes that could pose a quality risk if they weren't immediately incorporated across all parts at once, but there were a couple of problems with this:
- This tactic drove redundant change order reviews, since the default was still to delay incorporation
- These redundant reviews created additional opportunities for errors or inconsistent application
- Managing all of this effectively created yet another thing to manage
A second-order effect of this "just in time" approach was that these issues had a much higher probability of being escalated, forcing higher levels of management to spend more time on them.
Finally, there are questions as to whether one very large revision is preferable to several smaller revisions. In terms of revision creation, the administrative overhead was a difference of a couple clicks per revision. In terms of documenting changes, the difference came to whether one long change log was better than several short change logs. Realistically there didn't seem to be an advantage.
A New Metric and its Impact
We identified that the existing norm was to delay incorporation, with the intent that this was more efficient. However, as noted above, it tended to drive redundant work while also introducing quality risks. Our desired norm was for incorporation to happen as soon as changes were released. This behavior could be quantitatively defined by a new "inspection readiness" metric that represented what portion of an engineer's product was ready for inspection, and how many days of slack remained until the first order that was not ready.
The concept was that engineers should complete this routine work early, and doing this created a sort of "workflow reserve." This reserve was then quantified by the inspection readiness score. This score represented how many "days of reserve" an engineer had in their workflow for routine work. These "days of reserve" allowed greater flexibility and focus when urgent issues arose.
Implementing this new metric proved effective in bringing my team's service level from the greatest cause for concern to the best in the division. My peer manager in inspection was very happy with its effectiveness and asked if we could expand implementation to other teams.
Fortunately I had some good news for her, as I had already discussed this with the Principal Engineer in charge of our Division's data committee. I was serving on his panel, which was outlining new reporting structures that would expand this concept into an automated, more holistic solution spanning functions. While I was not in the organization long enough to see this come to fruition, I was able to share the lessons that had proved effective.
Costs and Benefits of a New Metric
While this new metric was effective in driving the change in behavior, it was one more metric to track, and it brought its own overhead: a couple hours a week from one member of my team manually collecting the data (an automated reporting system wasn't possible in our configuration), roughly 4-6 hours of my time creating the calculation, compilation, and reporting systems, and 10 minutes a week compiling the data to share.
Could we have changed this behavior without creating a new metric? It is possible that I could have campaigned for it. Most likely, more than half the team would have run with it. Of those in the other half, some may have faced technical challenges from constraints I was unaware of, and some likely just wouldn't have understood the significance of the change.
Most importantly, having the metric allowed me to address these issues and concerns proactively, instead of waiting for people to continue having late orders. The metric enabled me to have conversations on the subject right away, and the data associated with it made identification of technical barriers easier.
Having the data and publishing it as a scorecard item made the goal measurable, a core criterion in SMART goals. It rapidly drove accountability across the entire team, and the way the data was reported gave me additional insight into how everyone's workloads changed over time. I knew who had a busy couple of months coming up, and who had the bandwidth to help out.
As a final note, I advocate reviewing KPIs regularly to see what aligns with your organization's 1-5 year plans. In this case, the KPI aligned with my strategic goal of creating flexibility by completing routine tasks more proactively.
Lessons Learned Using KPIs to Drive Cultural Change
Below I outline some of the factors that contributed to my success in implementing this. Gaining buy-in on a metric that plays a role in each team member's success requires a high degree of emotional intelligence and socialization.
Step 1: Understand what is driving the current situation
What is driving the team to choose their current priorities? What constraints or barriers exist? Do they not understand the importance of the item in question, or is the current behavior a workaround to a different issue?
The most fundamental requirement for success is to fix the underlying systemic failures that are driving this behavior. In this particular example, I discovered that there was actually an issue with our change control system caused by an update to our MES that was contributing to the behavior. In the early stages of this venture I had another one of my engineers kick off a smaller project to correct this. Removing this barrier was critical to the success of the project.
In many cases just fixing the systemic issue may be adequate to obtain the desired behavior. This could be verified by considering whether the systemic issue was a primary driver to the undesired behavior or just a contributing factor. In this particular case however, the primary driver was a widespread perception that delaying incorporation was actually more efficient.
Demonstrating that you understand the constraints driving the undesirable behavior will gain your team's trust.
Step 2: Develop a potential new metric
Feel free to pilot several variations of the metric. An important aspect is its sensitivity and bias across the expected range of values. For example, what numerical range is acceptable? Does it fall on a linear range, or is it exponential? Is there a progressive response as the metric varies from good to bad; in other words is there an identifiable "marginal" range or does it just snap from good to bad without warning?
Ideally your people should have enough warning from an "ok" to "marginal" range that they can make adjustments before they end up in the red zone.
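One way to implement the good/marginal/red banding described above is a simple zone function over the days-of-reserve value, so that scores degrade progressively rather than snapping from good to bad. The thresholds below are illustrative assumptions for a pilot, not values from the original rollout.

```python
def readiness_zone(days_of_reserve: int,
                   red_below: int = 5,
                   marginal_below: int = 15) -> str:
    """Map days of reserve onto traffic-light zones with a 'marginal' band,
    giving engineers warning before an order actually goes red.
    Thresholds are hypothetical pilot values; tune them during benchmarking."""
    if days_of_reserve < red_below:
        return "red"
    if days_of_reserve < marginal_below:
        return "marginal"
    return "green"
```

Widening or narrowing the marginal band is exactly the kind of sensitivity tuning worth piloting: too narrow and the metric snaps from green to red without warning; too wide and it loses urgency.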
Step 3: Begin benchmarking
Communicate to the team that you are experimenting with the metric to establish a baseline. Assure them that, because there is so much variation between part assignments and overall situations, no judgement will be attached to baseline values, and that data will not be assessed for performance until the first performance cycle after the metric becomes effective.
Be very transparent in why you are doing this, and that you are bringing visibility to where the team is doing well and where they need help. Follow through on this commitment by discussing concerns regarding these metrics without judgement, always seeking to understand.
Explain your algorithm in detail and what factors drive the baseline data. Invite the team to raise concerns with the metric, and expect that most of your real feedback will come via email, chat, or one-on-one meetings.
Make your algorithm, data, and any scoring spreadsheets available. Your team is going to find errors in calculations or incorrect assumptions that you are blind to. Be open and gracious about this feedback and responsive to errors. Following up when you think you have fixed an issue will gain their trust, and will also confirm whether you actually fixed it or it still needs work.
Note that there are likely some people who will have concerns or confusion about the intent of the new metric. This is natural; remember that you are changing what is valued on the team. If your team is passionate and driven, it is healthy that they should be concerned about what is driving a change to the team's values. Understand these gaps and bridge them.
Step 4: Implement and monitor
Make and communicate a plan to formally implement the new metric well in advance. You will want several months of data to establish a baseline. For the first performance cycle, it may be a good idea to establish this metric as a stretch goal that is not a minimum expectation, but rather a differentiator that can contribute towards an exceeds expectations rating.
For those who have low scores, work to understand what is driving them. Acknowledge the challenges they face, remove any barriers, and ask how they plan to improve. If they simply need to catch up, emphasize cycle-over-cycle improvement rather than raw scores.
Conclusion
In navigating the complexities of increased scope and customer expectations, my experience has underscored the importance of proactive change management and effective communication within teams. By understanding the root causes of delays and implementing a new metric focused on inspection readiness, we created cultural change and transformed our department's performance from a point of concern to a model of efficiency. This journey highlighted not only the value of data-driven decision-making but also the necessity of fostering a culture of accountability and collaboration.
As you reflect on your own organizational challenges, consider how proactive metrics can drive meaningful change in your teams. Engage with your team members to identify barriers they face and explore potential solutions together. Implement a system for tracking progress and celebrating improvements, no matter how small. By fostering an environment of transparency and continuous improvement, you can enhance your team's performance and contribute to your organization's long-term success. Start today by initiating a conversation about the changes you can make to improve efficiency and quality in your work processes.
Thanks for reading! If you found this helpful, feel free to share it with your network or reach out with your own engineering challenges. I’m always up for a good problem-solving chat.