
The Value and Use of Hunches Fed by Guesstimates Resting on Pseudo-Measures
Things They Forgot To Tell Me in Business School


As a consultant, I spend a lot of time working with real estate market forecasts for large, long-term, market-defining land development programs in volatile economies with limited transparency and data availability. I then formulate plans and dispense advice based in significant part on the content of these forecasts. While this may seem like a dubious undertaking, it is a case of faute de mieux; for as Erasmus may have said, “in the land of the blind the one-eyed man is king,” and a squinty-eyed peek into the future is far better than a blind leap. Nonetheless, using thoroughly unprovable analysis of a future state to recommend ‘concrete’ actions and investments in the near term does make one reflect on such things as accuracy versus precision, measurement versus estimation, and the importance of art amidst a backdrop of data science.

Accuracy versus precision. Often used interchangeably, they are closely related but different. Accuracy is the ability to hit a bullseye, or, in fancier terms, the proximity of a given value to a known, standard, or desired value. Precision is the ability to group your shots, the proximity of a series of values to each other, whether those shots land anywhere near the bullseye or not. Thus, you can be precise without being accurate, and conversely, you can be accurate at times without being precise over time.

While being both accurate and precise is clearly ideal, which one matters most in any given situation depends on the context. For a winner-take-all event, accuracy will carry the day: hit the target, win the prize, no matter where the other shots go. In an extended or repeat point-scoring event, precision is key, as consistency pays greater dividends. It is also generally easier to move from precision to accuracy, as a single adjustment can improve all efforts, whereas multiple changes may be required to turn sporadic accuracy into consistent precision.
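
To make the distinction concrete, here is a toy sketch in Python. All the numbers are invented for illustration: shooter A groups tightly but well off-target, while shooter B straddles the bullseye but scatters widely.

```python
import statistics

# Five shots per shooter, measured as signed distance from the bullseye
# in cm. All values are invented for illustration.
shooter_a = [9.8, 10.1, 10.0, 9.9, 10.2]   # tight group, well off-target
shooter_b = [-8.0, 7.5, 0.3, -6.9, 7.1]    # centered on average, widely scattered

for name, shots in [("A", shooter_a), ("B", shooter_b)]:
    bias = statistics.mean(shots)     # accuracy: average offset from the bullseye
    spread = statistics.stdev(shots)  # precision: how tightly the shots group
    print(f"Shooter {name}: bias = {bias:+.1f} cm, spread = {spread:.1f} cm")
```

A single sighting correction of 10 cm would make shooter A both accurate and precise, while no single tweak does the same for shooter B, which is why precision is the easier starting point.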

Precision is the key for forecasting and consulting, as they are both in many ways confidence games where establishing and maintaining credibility is paramount, something best done by being consistent. In practice, this means using established methodologies and adopting clear and defensible assumptions. It is also possible to increase the odds of being accurate by growing the size of the target through sensitivity analysis and scenario building. Sensitivity analysis is just what it says, exploring the susceptibility of the forecast to changes in the values of given inputs, while scenarios place sensitivity into storylines to aid digestion. Doing this effectively hedges your bets, as in “if the economy goes up, expect these results; if it goes down, expect those results; a sensible middle point is here,” giving you a nice fat range within which you can rightfully claim accuracy.
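
As a minimal sketch of what that hedging can look like in practice, where the demand function, elasticity, and scenario values are all invented for illustration, sensitivity analysis and scenarios can be as simple as re-running one formula under downside, base, and upside assumptions and reporting the resulting range:

```python
# Hypothetical demand model: the formula, elasticity, and all values are
# invented assumptions for illustration, not a real methodology.
def annual_demand(gdp_growth: float, base_demand: float = 1_000.0,
                  elasticity: float = 1.5) -> float:
    """Demand scales with assumed GDP growth via a simple elasticity."""
    return base_demand * (1 + elasticity * gdp_growth)

# Scenarios wrap the sensitivity test in storylines: down, middle, up.
scenarios = {"downside": 0.01, "base": 0.03, "upside": 0.05}
results = {name: annual_demand(growth) for name, growth in scenarios.items()}

for name, units in results.items():
    print(f"{name:>8}: {units:,.0f} units")
print(f"Defensible range: {min(results.values()):,.0f}-{max(results.values()):,.0f} units")
```

The point estimate never stands alone; the reported range is the enlarged target within which the forecast can claim accuracy.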

Measurement versus estimation. These too are related, if more obviously different. All forecasting is forward estimation based on past measurement, with the difference between a good forecast and a bad one being the quality and relevance of the backward-looking measures and the soundness of the forward-looking assumptions. The trick here is to understand what is directly measured and what is adjusted, derived, filtered, or otherwise used as a proxy.

In real estate, you might build a long-range demand model for your project around measures of GDP [gross domestic product] and employment, following the understandable logic that employees have money and drive demand, while economic growth drives employment. Both GDP and employment are as close to a ‘hard’ measured reality as you will find and thus serve as a solid jumping-off point, even though experts will tell you that even these come with their own considerable baggage. They are also both lagging indicators and must be forecast forward before proceeding, with GDP leading the way and employment calculated off the back of it in some opaque but sophisticated manner. Macroeconomic indicators like these most often come from one of a limited number of specialist providers, not from the real estate experts themselves, with the primary data collected by governments. These employment numbers are then dissected to identify the size and timing of the target market[s] for the project according to assumed characteristics of future consumers. Finally, a ‘capture rate’ is applied to this target market[s] to determine the actual demand for the project over time. This capture rate may be the most subjective element of the entire exercise, as it depends on many uncertain behavioral and competitive factors and is extremely susceptible to the capturer’s state of mind. Thus, what may appear on the surface to be measured [“this forecast is based on GDP and employment numbers”] is in fact largely estimated.
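
A hedged sketch of that chain, with every number invented purely for illustration, shows how quickly the measured inputs disappear under layers of assumption:

```python
gdp_growth = 0.03            # forecast annual GDP growth (an assumption)
employment_elasticity = 0.6  # jobs growth per unit of GDP growth (an assumed proxy)
current_employment = 500_000 # base-year employment: the only measured input here

# Employment is not measured forward; it is derived from the GDP forecast.
future_employment = current_employment * (1 + employment_elasticity * gdp_growth) ** 10

# Dissect employment into the target market using assumed consumer characteristics.
target_market_share = 0.15   # share of workers matching the target profile (assumed)
target_market = future_employment * target_market_share

# The capture rate: the most subjective input in the chain.
capture_rate = 0.05          # share of the target market the project wins (a hunch)
project_demand = target_market * capture_rate

print(f"Derived demand: {project_demand:,.0f} households")
```

Only the base-year employment figure was measured; everything downstream is an estimate multiplied by another estimate.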

It doesn’t take a data expert to see ways in which this process could veer off-track with any number of systematic and random errors. Yet thanks to the beauty of modern spreadsheet technology, these numbers can, and likely will, be presented with extreme and often bewildering precision.
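
For instance, in a rough sketch with invented numbers, three soft estimates multiplied together still emerge from the spreadsheet looking spuriously exact:

```python
# Invented numbers: the inputs are guesses, but the output carries ten
# significant digits of apparent confidence.
target_market = 89_646        # derived, not measured (the output of a chain like the one above)
capture_rate = 0.05           # a hunch
avg_unit_price = 412_350.0    # an assumption dressed up as a datum

revenue = target_market * capture_rate * avg_unit_price
print(f"Forecast revenue: ${revenue:,.2f}")   # prints $1,848,276,405.00
print(f"Honest version:   ~${revenue / 1e9:.1f} billion, give or take")
```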

Looked at in this light, you would be excused for asking: why bother? Well, back to Erasmus, or maybe even Winston Churchill, and a spin on his famous line about democracy being the worst form of government except for all the other forms that have been tried. The only thing worse than relying on this analytical house of cards would be not relying on it. Almost all land development projects use similar processes in their decision-making, and they certainly don’t all fail; since success cannot be attributed solely to random chance, something must be going right.

This is where the art comes in. Forecasts are often presented as rigorous data science but amount just as often to hunches fed by guesstimates resting on pseudo-measures. This doesn’t make them wrong; it simply means they must be treated with equal parts respect and caution. There are several characteristics that smart practitioners bring to the table in both the formulation and utilization of these analyses that overcome many of the weaknesses in data, methods, and outputs. They understand the value and limits of the data presented, taking it seriously but not dogmatically. They allow for significant margins of error in decision-making that reflect the derived nature of the results. They assess the probability of key assumptions actually holding true and never underestimate the propensity some groups have for wishful thinking and others for conservative thinking. They avoid the common pitfalls of false precision by not accepting implausibly precise analysis that gives the appearance of truth and certainty [“the capture rate for millennials is 37.4%”], and by not assigning significant meaning to negligible differences in data [“returns drop from 12% to 10.7% in year 8”]. They apply their accumulated experience and sense of the market to gauge the reliability of the output. In short, smart practitioners apply common sense, an increasingly undervalued and uncommon trait in today’s data-driven world.
