Quantifying Impact

How to quantify the impact of your actions

Crossover works because we make our decisions based on detailed impact analysis grounded in real data. To come up with great insights and get your actions approved and done, you have to show their impact.

Within RemoteU we give you all the tools you need to do this, but we’ve seen many managers fail for lack of proper analysis.

How can someone fail at Impact Analysis?

The VPs and SVPs only approve actions that have proven impact, because they don’t want to waste their time.

If you present an insight without a quantified impact, it will get rejected and your idea will be delayed. That means your action is delayed and your impact on the team is delayed too, which will show up in your weekly metrics.

The more often this happens, the higher your chance of failing your metric goal and the program.

From another perspective, if your impact calculation is not accurate enough, it might lead you to focus on the wrong insights.

Imagine that you calculate the impact of your action as a 20% productivity gain, spend all your time on it, and once it’s done you discover a calculation mistake: the real impact was only 2%.

That means you wasted all of your time on a low-impact action, again decreasing your impact on the team and increasing your chance of failure.

Quantify based on QE team’s Report

This is the best way to quantify quality-improving insights, because the Quality Enforcement (QE) team reviews every single unit the team delivers.

In XO, when quantifying we look at either last week’s data or the trailing 4 weeks’ data. The QE team has all the data you need.

Quantify based on Enforce Quality Bar

On your Enforce QB Summary tab, on the deliverable template, you should see the quality failures aggregated by root cause. The percentage of failures in one category gives the impact of an action that eliminates that root cause.
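As a minimal sketch of this calculation (the failure labels below are hypothetical, not from a real Enforce QB Summary tab), each root cause’s share of total failures is its estimated impact:

```python
from collections import Counter

def impact_by_root_cause(failures):
    """Given one root-cause label per quality failure, return each
    cause's share of total failures -- the estimated impact of an
    action that eliminates that cause."""
    counts = Counter(failures)
    total = sum(counts.values())
    return {cause: count / total for cause, count in counts.items()}

# Hypothetical failure log aggregated from a week's reviews
failures = ["missing tests", "missing tests", "bad naming",
            "missing tests", "no docs"]
print(impact_by_root_cause(failures))
# {'missing tests': 0.6, 'bad naming': 0.2, 'no docs': 0.2}
```

Here an action that fixes “missing tests” would be worth a 60% reduction in quality failures, so it should be prioritized over the 20% causes.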

This approach works well if you don’t have a QE team, or if you disagree with your QE team’s standards and want to set the bar higher.

Quantify based on TMS

This is your best way to quantify the productivity impact of your decisions. The TMS shows the differences between two ICs working in the same role on similar tickets, and highlights the best practices, tools, and tricks used by your top performer (and sometimes by your bottom performer). Once you identify the root cause of why the top performer (TP) outperforms the bottom performer (BP) on a given step or set of steps, calculate the time saved in seconds, then divide it by the bottom performer’s total execution time to get your impact.

See the following example TMS:

If I identify the exact reason my top performer completes Step 2 so much faster, then I can quantify its impact:

Frequency

When you create your impact analysis, you have to think about whether your action is issue-specific, product-specific, or task-type-specific, or whether it can fix issues for every IC.

For example, in a coding team where Java, C#, Python, and C++ developers all work in one group, you can’t just provide a tool for C# and claim that the full team’s productivity will increase because of it. You either have to multiply your impact by the frequency (in this case 10% × 0.25 = 2.5%) or provide an equivalent tool for every language, if the issue exists for everyone.
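The frequency adjustment above is a single multiplication; a small sketch (the 10% raw impact and 25% C# share are the example’s numbers, not real data):

```python
def adjusted_impact(raw_impact, frequency):
    """Scale a raw impact estimate by the share of the team's work
    (or ICs) the action actually applies to."""
    return raw_impact * frequency

# A C#-only tool in a team where C# is ~25% of the work:
# 10% raw productivity impact becomes 2.5% team-wide.
print(adjusted_impact(0.10, 0.25))  # 0.025, i.e. 2.5%
```

Forgetting this step is the most common way to overstate an action’s impact by 4x or more.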

Quantify based on ZBT

Once you’re done with your ZBT, you should have the knowledge and experience to calculate the impact your decisions will have on the team room by comparing the top performer’s numbers to your own. A good practice is to do the same ticket in your ZBT that your top performer did in their TMS; this gives you a clean comparison between the most ideal scenario and their actual work.

See the following example ZBT:

Let’s say Step 1 and Step 4 are administrative tasks that we’re eliminating or optimizing during ZBT. We can calculate the impact by subtracting the new ZBT values from the TMS values ((8 − 0) + (7 − 3) = 12 seconds), then dividing by the top performer’s total TMS time (34 seconds), like:
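That subtraction-and-divide can be sketched as follows, using the step values quoted above (8 s and 7 s in the TMS, 0 s and 3 s after ZBT, 34 s TP total):

```python
def zbt_impact(tms_step_seconds, zbt_step_seconds, tp_total_seconds):
    """Impact of eliminating/optimizing steps: the seconds removed in
    the ZBT versus the top performer's TMS, divided by the top
    performer's total TMS time."""
    saved = sum(tms - zbt
                for tms, zbt in zip(tms_step_seconds, zbt_step_seconds))
    return saved / tp_total_seconds

# Step 1: 8s -> 0s (eliminated), Step 4: 7s -> 3s (optimized)
print(zbt_impact([8, 7], [0, 3], tp_total_seconds=34))  # ≈ 0.35, i.e. ~35%
```

So even the top performer would get roughly 35% faster if those two administrative steps were removed or streamlined for the whole team.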

Quantify based on Experiment

If you have no way to prove your idea without testing it, you should run an experiment. In real life, to conduct an experiment you’ll need at least two ICs with similar expertise and similar quality/productivity metrics. Implement your idea with one IC while the other serves as the control group. Ideally, if you have the capacity, give the exact same ticket to both ICs. Record them working and do a TMS on these videos, then quantify your impact based on the TMS (see above).
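At the simplest level, the experiment’s headline number is the treatment IC’s relative time saving over the control IC on the same ticket; a sketch with made-up totals (the per-step TMS comparison above is still where the root-cause detail comes from):

```python
def experiment_impact(treatment_seconds, control_seconds):
    """Relative productivity gain of the treatment IC (who got the
    change) over the control IC on the same ticket."""
    return (control_seconds - treatment_seconds) / control_seconds

# Hypothetical total execution times from the two recorded videos:
# treatment finished in 45 minutes, control in 60 minutes.
print(experiment_impact(treatment_seconds=45 * 60,
                        control_seconds=60 * 60))  # 0.25, i.e. 25% faster
```

With only one IC per group, treat this as an estimate, not proof: any pre-existing skill gap between the two ICs lands directly in the number.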
