Driver-Based Planning Step 2: Defining reports and analyses

Posted by Michael Coveney

Managers typically make decisions based on what is presented to them. However, it’s essential that reports and analyses are presented in the context of what is going on in the organisation and in the business world as a whole, otherwise the wrong conclusions will be drawn. For example, a drop in sales may be due to a general downturn in the economy, or to a new competitor introducing a similar product at a much reduced price.

If reports do not consider all the ‘drivers’ of performance and present them in a straightforward manner, decisions may be taken that not only impact other departments’ ability to perform but, through a domino effect, greatly disadvantage the organisation.

So what does an effective report look like? Well, that depends on the decision(s) that senior executives want to take. It’s no good simply producing a comprehensive reporting pack and expecting managers to wade through it and come to the right conclusions. What is vital is that each report presents numbers in the context of the decision to be made.

Let’s assume the decision is to assess whether the current sales forecast is accurate. The report then needs to show the sales forecast in comparison with the budget (which itself should be a forecast from last year) and with the last forecast. It should show data for a number of periods – both actuals and future projections. What the reader should be looking for is whether the forecast follows a definite trend or is ‘random’ and hence has no logical basis. It should also contain a forecast from the people involved, as they may know something that cannot be modelled, such as a major decision being delayed.
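To make this concrete, here is a minimal sketch of such a forecast-review table. All figures, period labels, and the `variance_report` helper are hypothetical, invented purely for illustration; the point is simply to show the current numbers side by side with the budget and the prior forecast, with variances against each.

```python
# Hypothetical figures: Q1-Q2 are actuals, Q3-Q4 are forecast.
periods        = ["Q1", "Q2", "Q3", "Q4"]
budget         = [100, 110, 120, 130]   # last year's forecast, used as budget
prior_forecast = [ 98, 108, 118, 126]   # the previous forecast round
current        = [ 95, 104, 112, 118]   # actuals, then the current forecast

def variance_report(periods, current, budget, prior):
    """Per-period comparison of the current numbers against the budget
    and the prior forecast -- the two reference points the review needs."""
    rows = []
    for p, c, b, pf in zip(periods, current, budget, prior):
        rows.append({
            "period": p,
            "current": c,
            "vs_budget": c - b,
            "vs_prior_forecast": c - pf,
        })
    return rows

for row in variance_report(periods, current, budget, prior_forecast):
    print(row)
```

A reader scanning the two variance columns can quickly see whether the forecast is drifting consistently in one direction (a trend worth explaining) or bouncing around with no obvious pattern.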

The next thing to display is the set of ‘drivers’ that make up the forecast, again with actuals and historical trends. Here we are trying to assess whether the relationships defined in the model still make sense. Finally, there needs to be some commentary, either from the sales people involved or about the data being used. For example, external data may be out of date, or just a ‘best guess’ that may not be accurate.
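One simple way to check whether a driver relationship still holds is to look at the ratio it implies over time. The sketch below assumes, purely for illustration, that sales were modelled as headcount of sales reps multiplied by revenue per rep; if the implied revenue-per-rep ratio drifts beyond some tolerance, the model’s relationship is worth revisiting. The figures and the 10% band are invented.

```python
# Hypothetical data: actual sales and the driver (number of sales reps).
actual_sales = [500, 540, 600, 700]
reps         = [ 10,  11,  12,  14]

# The ratio the model assumes to be roughly constant: revenue per rep.
ratios = [s / r for s, r in zip(actual_sales, reps)]

# Flag the relationship as stable if the spread of ratios stays
# within 10% of their average (an arbitrary illustrative tolerance).
mean_ratio = sum(ratios) / len(ratios)
drift = max(ratios) - min(ratios)
stable = drift / mean_ratio < 0.10

print("revenue per rep:", [round(r, 1) for r in ratios])
print("relationship stable:", stable)
```

In a real review the same idea would be applied to each driver in the model, with the trend of each ratio shown alongside the forecast itself.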

It’s only by considering the forecast in relation to its context that a realistic ‘best assessment’ can be made.

With models that are used in scenario analysis – for example, looking at the impact of a change in structure or assessing a range of driver values – the report will probably need to give the reader a ‘side-by-side’ comparison of the changes. Most modelling systems struggle with this unless the model is ‘duplicated’ and each scenario run as a separate modelling exercise. In these situations the report will need to access multiple models – one per scenario – and display them as a single report. It’s essential, though, that each scenario is documented so that a casual reader knows what each column represents.
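The merging step can be sketched very simply. In the example below each separately-run “model” is reduced to a dictionary of line-item values, and the scenario names (all hypothetical) become labelled columns in a single side-by-side table, so every column is documented for the casual reader.

```python
# Hypothetical outputs from three separately-run scenario models.
# Each scenario is named explicitly so report columns are self-documenting.
scenarios = {
    "Base case":      {"Sales": 1000, "Costs": 700, "Profit": 300},
    "Price cut 5%":   {"Sales": 1050, "Costs": 720, "Profit": 330},
    "New competitor": {"Sales":  900, "Costs": 690, "Profit": 210},
}

line_items = ["Sales", "Costs", "Profit"]

# Build one combined table: a header of scenario names, then one row
# per line item with its value under each scenario.
header = ["Line item"] + list(scenarios)
rows = [[item] + [scenarios[name][item] for name in scenarios]
        for item in line_items]

print("\t".join(header))
for row in rows:
    print("\t".join(str(cell) for cell in row))
```

In a real implementation each dictionary would be replaced by a query against the corresponding duplicated model, but the shape of the report is the same: one documented column per scenario.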

Having done all this, we are still not ready to start building the model. In my next blog I will look at defining who the users of the model will be and what their roles are.

Michael Coveney

Michael Coveney spent 40+ years in the software analytics business with a focus on transforming the planning, budgeting, forecasting and reporting processes. He has considerable experience in the design and implementation of business analytics systems with major organisations throughout the world. He is a gifted conference speaker and author whose latest book, ‘Budgeting, Forecasting and Planning In Uncertain Times’, is published by J Wiley. His articles have also appeared on www.fpa-trends.com, which encourages innovation in FP&A departments.