Quantifying Impact

How to quantify the impact of your actions

Crossover works because we make our decisions based on detailed impact analysis grounded in real data. To come up with great insights and get your actions approved and executed, you have to show their impact.

Within RemoteU we give you all the tools you need to do this, but we’ve seen many managers fail for lack of proper analysis.

How can someone fail on Impact Analysis?

The VPs and SVPs only approve actions that have proven impact, because they don’t want to waste their time.

If you present an insight without an impact analysis, it will be rejected and your idea will be delayed. That means your action is delayed and your impact on the team is delayed, and that shows up in your weekly metrics.

The more this happens, the higher your chance of failing your metric goal and the program.

From another perspective, if your impact calculation is not accurate enough, it can lead you to focus on the wrong insights.

Imagine that you calculate a 20% productivity impact for your action, spend all your time on it, and when it’s done it turns out you made a calculation mistake and the impact was only 2%.

This means you wasted all of your time on a low-impact action, which again decreases your impact on the team and increases your chance of failure.

Quantify based on QE team’s Report

This is the best way to quantify quality-improving insights, because a team’s Quality Enforcement (QE) team reviews every single delivered unit.

In XO, when quantifying we look at either last week’s data or the trailing 4 weeks’ data. The QE team has all the data you need.

Quantify based on Enforce Quality Bar

On the Enforce QB Summary tab of the deliverable template, you should see the quality failures aggregated by root cause. The percentage of failures in one category gives the impact of an action that eliminates that root cause.

This solution is great if you don’t have a QE team or if you don’t agree with your QE team’s standards and want to set the bar higher.
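As a minimal sketch of that calculation (the failure counts below are invented, not from a real Enforce QB Summary tab), the impact is simply each root cause’s share of all recorded failures:

```python
# Sketch: impact estimate from an Enforce Quality Bar summary.
# Failure counts per root cause are hypothetical; read yours from the
# Enforce QB Summary tab of your deliverable template.

failures_by_root_cause = {
    "skipped checklist step": 12,
    "misread requirements": 6,
    "environment setup issue": 2,
}

total_failures = sum(failures_by_root_cause.values())  # 20

# The % of failures in one category is the impact of an action
# that eliminates that root cause.
for cause, count in failures_by_root_cause.items():
    print(f"{cause}: {count / total_failures:.0%} of quality failures")
```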

Quantify based on TMS

This is your best way to quantify the productivity impact of your decisions. The TMS (Time Motion Study) shows the differences between two ICs working in the same role on similar tickets, and highlights the best practices, tools, and tricks used by your top performer (and sometimes by your bottom performer). Once you identify the root cause of why the TP outperforms the BP on a given step or set of steps, you calculate the time saved in seconds, then divide it by the bottom performer’s total execution time to get your impact.

See the following example TMS:

If I identify the exact reason why my top performer completes Step 2 so much faster, I can quantify its impact.
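The example TMS table isn’t reproduced here, so the step times below are hypothetical stand-ins; the mechanics are what matter: take the time saved on the step and divide it by the bottom performer’s total execution time.

```python
# Sketch: productivity impact from a Time Motion Study (TMS).
# Step times (in seconds) are hypothetical, not from a real TMS.

top_performer = {"Step 1": 120, "Step 2": 60, "Step 3": 90}
bottom_performer = {"Step 1": 130, "Step 2": 240, "Step 3": 100}

# Suppose the root cause of the gap on Step 2 is a tool the TP uses.
step = "Step 2"
time_saved = bottom_performer[step] - top_performer[step]  # 180 s
bp_total = sum(bottom_performer.values())                  # 470 s

impact = time_saved / bp_total
print(f"Sharing the {step} tool improves the BP's productivity by {impact:.0%}")  # ~38%
```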

Frequency

When you create your impact analysis, you have to think about whether your action is issue-specific, product-specific, or task-type-specific, or whether it can fix issues for every IC.

For example, in a coding team where you might have Java, C#, Python, and C++ developers all in one group, you can’t just provide a tool for C# and claim that the full team’s productivity will increase because of it. You either have to multiply your impact by the frequency (in this case, with C# developers making up a quarter of the team, 10% × 0.25 = 2.5%) or provide a tool for every language if the issue exists for everyone.
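In code, the adjustment is a single multiplication; the 0.25 below assumes C# developers are a quarter of the team, as in the example above:

```python
# Sketch: scaling a raw impact by how often the issue actually occurs.
raw_impact = 0.10  # 10% productivity gain for the ICs the fix applies to
frequency = 0.25   # assumption: C# developers are 25% of the team

team_level_impact = raw_impact * frequency
print(f"Team-level impact: {team_level_impact:.1%}")  # 2.5%
```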

Quantify based on ZBT

Once you’re done with your ZBT, you should have the knowledge and experience to calculate the impact your decisions will have on the team room by comparing the top performer's numbers to your own. A good practice is to do the same ticket in your ZBT that your top performer did in their TMS; this way you have a clean comparison between the ideal scenario and their actual work.

See the following example ZBT:

Let’s say Step 1 and Step 4 are administrative tasks that we’re eliminating or optimizing during ZBT. We can calculate the impact by subtracting the new ZBT values from the TMS values (8-0 + 7-3 = 12), then dividing by the top performer’s total TMS number (34), which gives roughly 12/34 ≈ 35%.
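A quick sketch of that arithmetic, using only the numbers quoted above (the full step table comes from the example ZBT, which isn’t reproduced here):

```python
# Sketch: impact of a Zero-Based Target (ZBT) using the numbers quoted above.
# Step 1: TMS = 8, ZBT = 0 (eliminated); Step 4: TMS = 7, ZBT = 3 (optimized).
tms_step1, zbt_step1 = 8, 0
tms_step4, zbt_step4 = 7, 3
tp_total_tms = 34  # top performer's total TMS time

time_saved = (tms_step1 - zbt_step1) + (tms_step4 - zbt_step4)  # 12
impact = time_saved / tp_total_tms
print(f"ZBT impact: {impact:.0%}")  # ~35%
```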

Quantify based on Experiment

If you have no way to prove your idea without testing it, you should run an experiment. In real life, to conduct an experiment you’ll need at the very least two ICs with similar expertise and similar quality/productivity metrics. You implement your idea with one of the ICs while the other serves as the control group. Ideally, if you have the capacity, you should give the exact same ticket to both ICs. Record them working, do a TMS on these videos, and then quantify your impact based on the TMS (see above).
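A minimal sketch of how that comparison might look once both recordings have been through a TMS (the step times below are invented):

```python
# Sketch: quantifying an experiment by comparing TMS totals for the
# experiment IC (who got the change) against the control IC (who did not).
# Step times are hypothetical, in seconds, measured from the recorded videos.

control_ic_steps = [140, 300, 95]     # TMS step times without the change
experiment_ic_steps = [135, 180, 90]  # TMS step times with the change

control_total = sum(control_ic_steps)        # 535 s
experiment_total = sum(experiment_ic_steps)  # 405 s

impact = (control_total - experiment_total) / control_total
print(f"Measured productivity impact: {impact:.0%}")  # ~24%
```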
