Continuing the dialog with Troy Magennis on estimating lead times

When I posted my previous blog entry on estimating when a task would be complete, I hoped that Troy Magennis would weigh in. He did, with an excellent comment on that entry. You should check it out.

I think the difference in our approaches comes down to this. If you need a single a priori distribution to estimate end-to-end completion times, Weibull is probably the way to go (some prefer Rayleigh). I agree with Troy’s reasoning and know that he has had a great deal of success with how he uses them. Certainly, as Troy points out, Weibull is a better choice than triangular in this case.

In the approach I advocate, one builds up a chain of distributions and combines them using Monte Carlo simulation. In the example in the previous entry, one distribution is for when the work would start and the other is for the duration of the work. One might add a third for the time it takes for the work to be accepted. Using triangular distributions for the component distributions does make sense (a short sketch follows the list below):

  • They are easy to elicit – Asking for best case, worst case, and likely case is easier than estimating the Weibull parameters. Indeed, following Douglas Hubbard’s ideas in his text How to Measure Anything, staff can easily be trained to provide the triangular parameters.
  • They give one several control points in the process and support a ‘what-if’ analysis.
  • I suspect, but don’t know, that this approach converges to a Weibull.
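
To make the chaining concrete, here is a minimal Monte Carlo sketch in Python. The best/likely/worst values are made-up illustrations, not estimates from the previous post:

```python
# A minimal sketch of chaining triangular distributions with Monte Carlo.
# The best/likely/worst values (in days) are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # number of Monte Carlo trials

# Each stage is a triangular distribution: (best case, likely case, worst case)
start_delay = rng.triangular(1, 3, 10, size=n)   # when the work starts
duration    = rng.triangular(2, 5, 15, size=n)   # how long the work takes
acceptance  = rng.triangular(0, 1, 5, size=n)    # time until the work is accepted

completion = start_delay + duration + acceptance  # end-to-end completion time

print("median completion:", round(np.percentile(completion, 50), 1), "days")
print("85th percentile:  ", round(np.percentile(completion, 85), 1), "days")
```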

So, which to choose? I think the consultant’s answer is best: “It depends.” If you want a good, workable view of the lead times, go with Weibull. If you want the more granular view of the process and can do Monte Carlo simulation, I would suggest building the chain of triangular distributions.

Again, thanks to Troy for weighing in. I look forward to continuing the dialog. There may be interesting hybrids. Are there any other opinions out there?

About the Author:

I'm the founder and CTO of Aptage. I help management teams and professionals manage better in the face of uncertainty.

One Comment

  1. troymagennis February 28, 2014 at 4:52 pm

    Hi Murray,

    Thanks for your follow-up note.

    Just a quick question: Do you get people to estimate the “start time” distribution for all tasks? Wouldn’t they just need to estimate the first task’s start time, and the other tasks cascade from there? I think that’s how I read your material.

    I think we actually do the same or a very similar thing (depending on the answer above). I model with this granularity when looking deeper into sensitivity analysis. It does answer different questions, namely: if there is one thing I could spend money to accelerate or fix, what would it be? I take a slightly different approach, though, that I’d like to share –

    1. I get teams to estimate the 10th and 90th percentiles, like Hubbard and Evans suggest. 5th or 95th if they are detail-oriented, OCD types of folks (AKA devs)!
    2. I model using a UNIFORM distribution (maximum entropy, and who can really be sure within that range?).
    3. I perform a sensitivity analysis to determine which factors matter most in this model.
    4. I revisit the top 10 with experts and do more analysis of distribution assessment.
    5. I update the model and make decisions based on this.

    My view is that if a factor isn’t important at a uniform distribution, it won’t be significant at any of the other distributions, which dampen one end, normally the high bound. If I can prove those aspects are “less important,” I can drive focus on the ones that are. Normally, defect rates, dependencies, and just a few tasks turn out to be key.
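
    A rough sketch of steps 2 and 3 (uniform sampling over each factor’s 10th–90th percentile range, then a simple correlation-based sensitivity check); the factor names and ranges are made-up placeholders:

    ```python
    # Sketch: uniform sampling per factor, then rank factors by correlation
    # with total lead time. Names and ranges are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 100_000

    # (10th percentile, 90th percentile) estimates in days, per factor
    factors = {
        "development":  (3.0, 12.0),
        "defect_fixes": (0.5, 8.0),
        "dependencies": (0.0, 15.0),
        "review":       (0.5, 2.0),
    }

    samples = {name: rng.uniform(lo, hi, n) for name, (lo, hi) in factors.items()}
    total = sum(samples.values())

    # Simple sensitivity measure: correlation of each factor with the total.
    for name, s in samples.items():
        r = np.corrcoef(s, total)[0, 1]
        print(f"{name:>12}: correlation with total lead time = {r:.2f}")
    ```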

    It is going to take a lot of convincing for me to sign up for people being capable of accurately estimating (with respect to Hubbard and Evans) factors that are unrelated to the original task’s start or work time. I agree they can estimate numerical absolutes related to “their” influence on work time, but not others like a critical prod issue impacting them, or paternity leave of a critical team member, etc. My gut guess with your simulation is that the result would NOT be “Weibull”. I think the estimates are all correlated with the original tasks. If it has a tail, it will be thick. The law of large numbers would lead to a more Normal distribution. What are you seeing in actual shapes?

    Here is my consultant “It Depends” decision tree:

    If answering questions on staff and dates: Monte Carlo on lead time, defect rate, scope creep, amount-of-work scenarios, and team sizes/skills.

    If answering questions on management interventions, I model in more detail the influences on lead time to do sensitivity analysis. I break down cycle time into its constituent aspects, sensitivity test, and refine distributions only on the ones that matter.

    Love the discussion,
    Troy.

    PS. Weibull is a family of distributions; Rayleigh is a Weibull with a shape parameter of 2.0. This is one reason I like Weibull as my guess for lead-time distributions. I can see how different practices drive that shape parameter lower, with more focus on reducing “delays” and “queues” with Lean or Agile practices… So I think we all agree on the same shape, but I’m still looking for the black swan to prove me wrong here.
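
    A quick numerical check of that shape-2 relationship (the scale value here is an arbitrary assumption): a Weibull with shape k = 2 and scale λ has the same density as a Rayleigh with σ = λ / √2.

    ```python
    # Verify: a Weibull with shape k = 2 and scale lam matches a Rayleigh
    # with sigma = lam / sqrt(2). The scale value is an arbitrary assumption.
    import numpy as np
    from scipy import stats

    x = np.linspace(0.1, 10, 5)
    lam = 3.0  # assumed Weibull scale
    print(stats.weibull_min.pdf(x, c=2.0, scale=lam))
    print(stats.rayleigh.pdf(x, scale=lam / np.sqrt(2)))  # identical values
    ```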
