How to measure anything


Over the past year I have had the opportunity to advise a number of people I have worked with in the past who are now starting companies. For the past few years my role has been that of a director of both design and R&D teams, so I enjoy continuing in that kind of role by providing direction to these exciting new startup ventures.

One thing many of these early-phase projects lack is clarity and proper measurement of the problem and the opportunity; the teams have only a vague sense of their probability of success given their chosen direction versus alternative approaches.

Lately a lot of new startups point to some fuzzy “user experience” as their advantage and the reason they will succeed. As a design director I know this is B.S.: even design and user experience contributions can be measured to a reasonable degree of confidence.

More times than I care to count, I have pointed teams to read and practice the techniques in "How to Measure Anything: Finding the Value of Intangibles in Business" by Douglas Hubbard.

The first response is always, “Read a book? Are you serious?” Yes, I am serious. This is a good book, and for many people it provides some very important tools for estimation and prediction. If you learn these techniques and use them on your project, they will become automatic and require little thought. It just takes practice.

Here are just a few key excerpts to motivate you to read it:

There are three reasons why people think things can't be measured, and all are misconceptions about different aspects of measurement: concept, object, and method.

  1. Concept of measurement. The definition of measurement itself is widely misunderstood. If one understands what measurement means, a lot more things become possible.
  2. Object of measurement. The thing being measured is not well defined. Sloppy and ambiguous language gets in the way of measurement.
  3. Methods of measurement. Many procedures of empirical observation are not well known. If people were familiar with some of the basic methods, it would become apparent that many things thought to be immeasurable are not only measurable but may already have been measured.

Hubbard does a good job of making it clear that these three misconceptions are easily overcome, and you can learn to measure anything quickly. Perhaps more importantly, you will end up with a better-defined question or problem through the process of identifying measures.

He uses a definition of measurement based on uncertainty that is very powerful, echoing the notion of entropy from Shannon's information theory:

Measurement is a quantitatively expressed reduction of uncertainty based on one or more observations.

By framing measurement as “uncertainty reduction,” Hubbard has solved the problem for many people who get stuck thinking of measurement as an impossibly exact science. For as Hubbard points out, even inexact measurement is valuable if it reduces uncertainty by some non-trivial amount. This is especially true when investment in the project is high.
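
To make the “uncertainty reduction” framing concrete, here is a minimal sketch of my own (not from the book, with made-up numbers): even a single rough observation that merely narrows the plausible range removes a quantifiable amount of uncertainty in the Shannon sense.

```python
import math

def entropy_bits(n_outcomes: int) -> float:
    """Shannon entropy (in bits) of a uniform distribution over n equally likely outcomes."""
    return math.log2(n_outcomes)

# Before any observation: suppose the task could plausibly take 1-10 days (10 outcomes).
prior = entropy_bits(10)
# After one rough observation (a similar module took 3 days), we rule out 7-10 days.
posterior = entropy_bits(6)

print(f"Uncertainty before: {prior:.2f} bits")
print(f"Uncertainty after:  {posterior:.2f} bits")
print(f"The observation removed {prior - posterior:.2f} bits of uncertainty")
```

The measurement is nowhere near exact, yet it still reduces uncertainty by a non-trivial amount, which is exactly Hubbard's point.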

He spends several chapters on techniques for determining what to measure and how, and then gets into what I found to be a great exercise for teams: calibrating team members' estimation. By evaluating how well calibrated people are at estimation (usually not at all!), we can get the team to understand that they need to improve, and more importantly that, with the right tools, they can learn to be quite proficient at estimation. Having this skill will help them in many ways in their day-to-day work.

Hubbard discusses the terrible misunderstanding and misuse of risk management in business and R&D projects. You know, those useless Low/Med/High values people put on their slides! He instead prescribes the use of basic confidence intervals and shows the reader how to use them.

*A simple example is his test of confidence.* For a given question you provide a range you believe contains the answer with 90% confidence (a 90% CI). For example, you say with 90% confidence that the module will take you 2-3 days to code, unit test, and submit to system test; or that the wireframe will take you 3-6 hours to update, get reviewed, and hand off to engineering.

Then ask yourself this: suppose I offered you a chance to win $1,000 in one of two ways:
  1. You win $1,000 if your estimate turns out to be correct. If not, you win nothing.
  2. You spin a dial divided into two unequal “pie slices,” one comprising 90% of the dial and the other just 10%. If the dial lands on the large slice, you win $1,000. If it lands on the small slice, you win nothing (i.e., there is a 90% chance you win $1,000).

About 80% of people choose option 2, but why would that be if their stated confidence is 90% or higher? In reality they are probably more like 50%-80% confident. For those who choose option 1, it could also mean they are more than 90% confident but stating a conservatively wide range, which can be equally undesirable.

This is the Equivalent Bet Test. Set up a mental test like this when assessing confidence: if you pick the wheel, keep lowering its win percentage (80%, 70%, 60%, ...) until you no longer prefer the wheel; that crossover point tells you your real confidence in the answer. Research shows that turning the question into a bet (real or pretend) improves the quality of the estimates.
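
The logic is simple expected value, and you can sketch it in a few lines (my own illustration with hypothetical numbers, not Hubbard's): if you prefer the wheel at 90%, your real confidence in your range must be below 90%, and the point where you switch back to your own range is roughly where your real confidence sits.

```python
def preferred_bet(real_confidence: float, wheel_chance: float) -> str:
    """Compare expected payoffs of betting on your own range vs. spinning the wheel."""
    own_range = 1000 * real_confidence   # win $1,000 only if your range holds
    wheel = 1000 * wheel_chance          # win $1,000 with the wheel's fixed odds
    return "own range" if own_range >= wheel else "wheel"

# Hypothetical: you state a "90% CI", but your real confidence is only 70%.
real_confidence = 0.70

# Walk the wheel down until you prefer your own range; the crossover is ~your real confidence.
for wheel_chance in (0.9, 0.8, 0.7, 0.6, 0.5):
    print(f"wheel at {wheel_chance:.0%}: prefer the {preferred_bet(real_confidence, wheel_chance)}")
```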

People who are very good at assessing their uncertainty (i.e. they are right 80% of the time they say they are 80% confident, and so on) are called calibrated. It is good to be calibrated: projects and life become more predictable.
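
One way to check calibration on a real team, sketched here with made-up numbers: log the 90% ranges people give, then later count how many of the actuals fell inside them.

```python
# Hypothetical log of stated 90% CIs: each tuple is (low, high, actual outcome).
estimates = [
    (2, 3, 4),    # "2-3 days to code and test the module" -- it took 4
    (3, 6, 5),    # "3-6 hours to update the wireframe"    -- it took 5
    (1, 2, 2),
    (4, 8, 10),
    (2, 5, 3),
]

hits = sum(low <= actual <= high for low, high, actual in estimates)
print(f"Stated confidence: 90%, observed hit rate: {hits / len(estimates):.0%}")
# A calibrated estimator's hit rate converges toward ~90% over many such ranges.
```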

This was just a short teaser of one simple technique; there are many more in the book. I highly recommend Hubbard's book for anyone involved in complex endeavors, whether on their own or in teams. You will find that problems become more clearly articulated once you have figured out how to measure them, and good calibration in estimation will significantly reduce your project risk and improve your decision making.

There was a recent question on Quora: why are software development task estimates regularly off by a factor of 2-3? I believe this happens partly because teams skip proper analysis, but also because bad individual estimates multiply across a team: with interdependencies between teams, those estimation errors compound into huge ones when everything is rolled together (the toy simulation below illustrates the effect). Get the book: How to Measure Anything: Finding the Value of Intangibles in Business.
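
Here is a toy Monte Carlo of that multiplier effect, with entirely made-up task numbers: each task's point estimate is only modestly optimistic on its own, but because the errors are right-skewed and the tasks all add up, the rolled-up schedule slips far more than any single estimate suggests.

```python
import random

random.seed(42)

# Toy project: ten interdependent tasks, each given a 5-day point estimate.
point_estimates = [5] * 10
planned_total = sum(point_estimates)

def simulated_actual(estimate: float) -> float:
    """Actuals skew toward overruns: sometimes a bit faster than planned, occasionally far slower."""
    return estimate * random.lognormvariate(0.3, 0.6)

trials = 10_000
totals = sorted(sum(simulated_actual(e) for e in point_estimates) for _ in range(trials))

print(f"Planned total:            {planned_total} days")
print(f"Median simulated actual:  {totals[trials // 2]:.0f} days")
print(f"90th percentile actual:   {totals[int(trials * 0.9)]:.0f} days")
```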
