Topics
37. How the "work" of the Support dpts affects the Cost Accounting Settings
The choice of a cost accounting system is usually made by turning the attention to the largest possible number of accounts to be considered and/or by increasing the number of costing centers, focusing on the factors that, in the eyes of the management accountant/controller, improve the accuracy of product costing and of department efficiency evaluation.
As a matter of fact, there are other aspects at issue that should be considered, with regard both to the decision-making support function and to internal factors, such as motivation and fairness, that ultimately affect the efficiency of the single business units and of the whole business.
Before starting, a step back is needed to make the discussion clearer and more focused.
There are three ways of tracing indirect costs to products/services/projects (from now on, for short, products):
- Job costing, when the direct costs are traced by product and the overheads are allocated directly to the products (the latter step on the basis of some cost driver);
- Operation costing, when the direct costs are imputed to the products while the overheads are charged to intermediate objects (departments or activities) on the basis of some cost drivers, and the intermediate-object costs are then imputed to the products, here too on the basis of given drivers;
- Process costing, when the allocation phase considers intermediate objects (departments or activities) to which both the traditional direct costs and the overheads are charged on the basis of some cost drivers, and the intermediate-object costs are then imputed to the products on the basis of given drivers.
Having said that, the implications mentioned above apply to the departmental approach, in particular when there are Service (Support) costing centers that deliver their work to the Production (Primary) ones and, even more, when these Service centers also work for each other, generating the so-called reciprocal flows.
What do the reciprocal flows look like?
As we know, inside a large organization there are the primary units (departments) that manufacture, or in the service industry process, the final output that goes into the customers' hands, and the secondary units (in some cases called service units) that support the former by providing their services.
In many cases the latter, for short Service dpts, work only for the primary units, for short Production dpts, and no issue arises about the accuracy of the allocation of their costs to the Production dpts beyond the traditional questions about the right cost drivers.
In many other cases the Service dpts deliver their "output" not only to the Production dpts but also to other Service dpts of the firm, receiving work from the latter in turn.
These are the reciprocal flows.
Right here we encounter some important points at issue about the most suitable cost allocation method to use.
Which points are we talking about?
1. The most accurate product costing method, which affects decision making on issues such as profitability analyses and operating decisions concerning which product lines to prefer, and pricing, in particular when the business adopts the mark-up technique on the full cost.
2. The efficiency evaluation of the departments, which are assessed on the basis both of the costs incurred internally and of the costs allocated to them for the services received.
We'll see how many cost allocation methods are involved and how the cost of the single cost objects varies as a result.
Three methods, three different results
Let's take the example of a manufacturing firm that uses the departmental approach for the overheads while the direct costs are traced to the products (that is, Operation costing).
Input Data
N. 2 Products: A, B
N. 2 Service Dpts: Alpha, Gamma
N. 2 Production Dpts: 1, 2
In this structure, one of the main characteristics that makes the choice of the departmental cost allocation method hard is that the Service dpts deliver their output not only to the Production dpts but to each other as well.
Here is the percentage distribution of their services, measured with reference to the labor hours worked:
Table 1 - Percent distribution of the Service dpt Labor Hours
| | Serv dpt Alpha | Serv dpt Gamma | Prod dpt 1 | Prod dpt 2 | Total |
|---|---|---|---|---|---|
| Serv dpt Alpha | - | 30% | 35% | 35% | 100% |
| Serv dpt Gamma | 15% | - | 35% | 50% | 100% |
The first phase results from the attribution of the direct costs by dpt (for instance dedicated equipment depreciation, labor...) and from the allocation of the overheads of the whole firm to the dpts concerned.
Table 2 - Total overheads by industrial dpts after the first phase – January 2019
| | Serv dpt Alpha | Serv dpt Gamma | Prod dpt 1 | Prod dpt 2 | Total |
|---|---|---|---|---|---|
| Total Overheads $ | 70,000 | 90,000 | 260,000 | 290,000 | 710,000 |
The following steps consist of allocating the costs of the Service dpts to the Production ones according to the percentage of the respective Labor Hours out of the total Labor Hours worked by those Service dpts, and then attributing the costs of each Production dpt to Products A and B.
The latter step uses, as a cost driver, the Machine Hours for Production dpt 1 and the Labor Hours for Production dpt 2.
Table 3 – Percent allocation of the Production dpt overheads to the Products A and B
| | Product A | Product B |
|---|---|---|
| Prod. dpt 1 - Mach Hours | 30% | 70% |
| Prod. dpt 2 - Labor Hours | 40% | 60% |
All the phases following the first one (which gives the results of table 2) can be carried out using three methods, each of them bringing about a different distribution of the overheads allocated to departments and products.
1) Direct Method
According to this method, all the costs incurred by and allocated to the Service dpts (see table 2) are in turn allocated to the Production dpts by taking into account only the respective percentages of the cost driver (in our example Labor Hours, see table 1), ignoring the reciprocal work flows between the Service dpts; the percentages going to the Production dpts are therefore re-scaled so that they add up to 100%.
After that, the final costs of the Production dpts, calculated this way, are attributed to the Products according to the percentages of the chosen cost driver (see table 3).
This ending step is common to the other two methods (the Step Method and the Reciprocal Method).
In order to keep the discussion as fluid as possible, and since the goal is to highlight the strategic side of the matter rather than to write a maths manual, the single calculations are not shown; here are the results for each Production dpt and for the final products.
Table 4 - Direct Method - Total Overheads to the Production Dpts
| Cost Source | Prod. dpt 1 | Prod. dpt 2 | Total |
|---|---|---|---|
| Serv dpt Alpha | 35,000 | 35,000 | 70,000 |
| Serv dpt Gamma | 37,059 | 52,941 | 90,000 |
| First Allocation Costs | 260,000 | 290,000 | 550,000 |
| Total | 332,059 | 377,941 | 710,000 |
Table 5 - Direct Method - Total Overheads to the Product A and B
| Product dpts | Product A | Product B | Total |
|---|---|---|---|
| Prod. dpt 1 | 99,618 | 232,441 | 332,059 |
| Prod. dpt 2 | 151,176 | 226,765 | 377,941 |
| Total | 250,794 | 459,206 | 710,000 |
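For readers who want to check the figures, the direct-method calculations behind tables 4 and 5 can be reproduced with a short Python sketch (the data and percentages are those of tables 1-3; the variable names are mine, chosen for illustration):

```python
# Direct method: Service dpt costs go straight to the Production dpts,
# ignoring the reciprocal flows (30% Alpha -> Gamma, 15% Gamma -> Alpha).
service_costs = {"Alpha": 70_000.0, "Gamma": 90_000.0}        # table 2
labor_share = {                                               # table 1
    "Alpha": {"Prod1": 0.35, "Prod2": 0.35},                  # 30% to Gamma ignored
    "Gamma": {"Prod1": 0.35, "Prod2": 0.50},                  # 15% to Alpha ignored
}
prod_totals = {"Prod1": 260_000.0, "Prod2": 290_000.0}        # first allocation

for dpt, cost in service_costs.items():
    shares = labor_share[dpt]
    base = sum(shares.values())           # re-scale the surviving shares to 100%
    for prod, share in shares.items():
        prod_totals[prod] += cost * share / base

# Split each Production dpt total over the products (table 3 drivers)
product_split = {"Prod1": {"A": 0.30, "B": 0.70},
                 "Prod2": {"A": 0.40, "B": 0.60}}
product_overheads = {"A": 0.0, "B": 0.0}
for prod, total in prod_totals.items():
    for product, share in product_split[prod].items():
        product_overheads[product] += total * share

print({k: round(v) for k, v in prod_totals.items()})
# -> {'Prod1': 332059, 'Prod2': 377941}
print({k: round(v) for k, v in product_overheads.items()})
# -> {'A': 250794, 'B': 459206}
```

The re-scaling step (35/70 for Alpha, 35/85 and 50/85 for Gamma) is exactly what "ignoring the reciprocal flows" means in practice.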
2) Step Method
This method requires that the costs of one Service dpt (in our example Service dpt Alpha), resulting from the first phase, be allocated to the other Service dpts (in our example just one, Gamma) and to the Production ones on the basis of the respective percentages of the cost driver (in our example Labor Hours, see table 1).
Then the costs of the other Service dpts, calculated this way, are attributed to the Production ones (taking into account the respective percentages of the cost driver, see table 1) and are in turn imputed to the products according to the percentages of the chosen cost driver (see table 3).
Here are the results concerning each Production dpt and the final Products.
Table 6 – Step Method - Total Overheads to the Production Dpts
| Cost Source | Prod. dpt 1 | Prod. dpt 2 | Total |
|---|---|---|---|
| Serv dpt Alpha | 24,500 | 24,500 | 49,000 |
| Serv dpt Gamma | 45,706 | 65,294 | 111,000 |
| First Allocation Costs | 260,000 | 290,000 | 550,000 |
| Total | 330,206 | 379,794 | 710,000 |
Table 7 – Step Method - Total Overheads to the Product A and B
| Product dpts | Product A | Product B | Total |
|---|---|---|---|
| Prod. dpt 1 | 99,062 | 231,144 | 330,206 |
| Prod. dpt 2 | 151,918 | 227,876 | 379,794 |
| Total | 250,980 | 459,020 | 710,000 |
Please notice how the results of both tables differ from those of the Direct Method.
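The step-method figures of tables 6 and 7 can be checked in the same way; here is a minimal Python sketch with the data of tables 1 and 2 (the order of closure, Alpha first, is the one chosen above):

```python
# Step method: close Alpha first (including its 30% to Gamma),
# then allocate Gamma's increased total to the Production dpts only.
alpha, gamma = 70_000.0, 90_000.0                  # table 2
prod = {"Prod1": 260_000.0, "Prod2": 290_000.0}    # first allocation

# Step 1: Alpha -> Gamma 30%, Prod1 35%, Prod2 35%   (table 1)
gamma += alpha * 0.30
prod["Prod1"] += alpha * 0.35
prod["Prod2"] += alpha * 0.35

# Step 2: Gamma (now 111,000) -> Production dpts only,
# with its shares re-scaled to 35/85 and 50/85.
prod["Prod1"] += gamma * 0.35 / 0.85
prod["Prod2"] += gamma * 0.50 / 0.85

print(round(gamma))                    # -> 111000
print({k: round(v) for k, v in prod.items()})
# -> {'Prod1': 330206, 'Prod2': 379794}
```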
3) Reciprocal Method
With this method, the costs of the Service dpts resulting from the first phase are recalculated taking into account the percentages of the reciprocal work flows (30% from Alpha to Gamma and 15% from Gamma to Alpha) by means of a system of equations (some Excel tools can also be used).
The Service dpt costs thus obtained are attributed to the Production ones on the basis of the respective percentages of the cost driver (in our example Labor Hours, see table 1) and are in turn imputed to the products according to the percentages of the chosen cost driver (see table 3).
Here are the results concerning each Production dpt and the final Products.
Table 8 – Reciprocal Method - Total Overheads to the Production Dpts
| Cost Source | Prod. dpt 1 | Prod. dpt 2 | Total |
|---|---|---|---|
| Serv dpt Alpha | 30,602 | 30,602 | 61,204 |
| Serv dpt Gamma | 40,681 | 58,115 | 98,796 |
| First Allocation Costs | 260,000 | 290,000 | 550,000 |
| Total | 331,283 | 378,717 | 710,000 |
Table 9 – Reciprocal Method - Total Overheads to the Product A and B
| Product dpts | Product A | Product B | Total |
|---|---|---|---|
| Prod. dpt 1 | 99,385 | 231,898 | 331,283 |
| Prod. dpt 2 | 151,487 | 227,230 | 378,717 |
| Total | 250,872 | 459,128 | 710,000 |
Please notice how the results of both tables differ from those of the Direct Method and of the Step Method.
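The equation system behind the reciprocal method can be written and solved in a few lines; here is a sketch using numpy (any linear solver, or the Excel tools mentioned above, would do):

```python
# Reciprocal method: the "full" costs A and G of the two Service dpts
# satisfy   A = 70,000 + 0.15 * G   and   G = 90,000 + 0.30 * A.
import numpy as np

coeff = np.array([[1.0, -0.15],
                  [-0.30, 1.0]])
const = np.array([70_000.0, 90_000.0])
A, G = np.linalg.solve(coeff, const)   # A ~ 87,434.55   G ~ 116,230.37

# The reciprocated totals are then allocated with the table 1 percentages.
prod1 = 260_000 + A * 0.35 + G * 0.35
prod2 = 290_000 + A * 0.35 + G * 0.50
print(round(prod1), round(prod2))      # -> 331283 378717
```

Note that the reciprocated totals (about 87,435 and 116,230) are larger than the table 2 figures, but the amounts finally allocated to the Production dpts still sum to the original 160,000 of Service costs.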
For instance, let's consider the differences between the costs allocated to the two Production dpts under each of the three methods.
Table 10 – Differences in the Total Production dpt Costs
| Methods | Prod dpt 1 minus Prod dpt 2 ($) |
|---|---|
| Direct | -45,882 |
| Step | -49,588 |
| Reciprocal | -47,434 |
These differences concern only one month, and their relatively small amounts could be deceiving about the usefulness of the points at issue.
If you consider a longer period, or if the costs allocated to the Service dpts are higher than in our example, it's easy to understand how these questions can cause conflicts between managers when their performance is evaluated on the basis of the full costs absorbed by their Production dpts.
Furthermore, when the allocation criteria aren't accurate because they don't consider the real consumption of the Service dpt work and its direction, which is achievable only through the most accurate system (the reciprocal method), the managers who take advantage of this misallocation of the overheads have a disincentive to keep their requests for Service work under control.
The difference in cost allocation also affects the costing of the Products, and as a result all the profitability analyses, above all those involving the long term, are distorted and can lead to erroneous decisions.
Not to mention the consequences of an erroneous pricing of the products based on the full cost.
For a more in-depth analysis of the arguments dealt with here, you can reach out to thestrategiccontroller.com on the Contacts page or at carlo.attademo@libero.it.
36. Only Just In Time?
In the past article, n. 31, dedicated to Capacity, I always referred only to the internal capacity of a business.
As a matter of fact, a firm is part of a supply chain and works with upstream and downstream partners to provide products and services to the customers, trying to meet their expectations and requirements.
That's why many decisions should also be focused on the issues and features of the whole supply chain.
Among those decisions, I want to write about Capacity matters and Just-in-Time systems.
Let's start with the former.
In case of an unexpected and large surge in demand for a product/component/service, the most natural reaction could be resorting to an external supplier.
In many cases, above all when a capacity reporting system is in place, the managers can become aware of idle or nonproductive capacity within the whole supply chain and verify whether the missing "part" can be manufactured internally.
When is that possible?
It's possible when the whole chain is flexible and the information system is well integrated, so that all the signals about the available capacity, and about where it is available within the supply chain, are surfaced quickly.
In this context, the needed level of inventory could be lowered thanks to a faster response time to the customer.
The benefits of this aspect are clear and, for manufacturing firms, can also be embedded into the issue highlighted above, the JUST-IN-TIME (JIT) manufacturing system, which can be adopted when the supply chain has the features just listed (flexibility and availability of capacity reporting).
The classical benefits resulting from the reduction of inventory add to the lower risk of damaged and obsolete goods.
In order to make the discussion clearer and more fluid, some JIT concepts are now recalled.
JIT (Just In Time)
It's the manufacturing system whose main feature is that no job/productive activity is started in the manufacturing dpt unless an order has been received from the customers.
The latter term includes both internal and external customers.
The result of this policy is a low level of inventory, which serves as a buffer for meeting the requests from the customers as fast as possible.
The logical consequence is that the quality of the products and processes must be lifted to the maximum level in order to reduce the risk of delivery delays and of returns of defective products.
In other terms, the adoption of JIT, as well as the existence of a flexible and well-integrated supply chain, is particularly fit for businesses that adopt a differentiation strategy with respect to their competitors.
Benefits of the JIT:
1) Direct:
- Elimination of investments in fixed assets that are not required. All of those assets would otherwise be tied up, that is, they could not be used for alternative purposes.
- Reduction in the labor costs associated with holding and recording inventory, as well as in the information system costs.
2) Indirect:
- All the advantages following the increase in the quality of the products/processes (here is the differentiation strategy), such as higher profitability thanks to increased sales and to the reduction in manufacturing quality costs, like those related to waste, scrap and spoilage management and to the reworking of defective products.
- As said before, the response time to the customer needs to be strictly monitored, producing as a result (when this element is rigorously put in place) greater customer satisfaction and, consequently, a larger market share and sales level.
Costs of the JIT:
1) Training of the personnel involved and setup of a new information and monitoring system that also includes a range of well-suited nonfinancial indicators intended to pinpoint defects, waste, errors and delays.
2) Reconfiguration of the physical (tangible) assets concerned (in some cases additional investments are needed), which should have the right layout to allow the new system to work at its best.
In setting up the JIT system you should of course take into account the main features of the industry concerned and the stage that industry is in.
For an in-depth analysis in this regard, you can contact www.thestrategiccontroller.com
The Capacity Issue Again
Let's go back to the capacity issue in the supply chain.
When we talk about the high quality of the products/processes we refer to this issue as well.
In fact, if the products received from an upstream partner are not "good" enough, nonproductive capacity is created inside the receiving organization in order to deal with the noncompliance-related issues.
In other terms, the actual workings of the supply chain can have a negative impact on the best use of capacity.
In the same way, the lack of flexibility of the supply chain has a negative impact also when a decision made by a downstream partner creates an important surge in demand that the suppliers are not able to meet in due time.
What do we mean by due time?
It means that the high demand previously recorded for the product involved is matched by the output received from the suppliers only when that same demand decreases, so that excess stocks are created.
These stocks should be returned to the suppliers, which must level their output downwards again, creating as a result nonproductive time when forced to set up the machines and structure once more.
That's just one more example of how the supply chain is a key factor to be considered in many decisions that impact the profitability of each of the firms concerned, and of how flexible it should be, especially in times and industries of great uncertainty.
35. The "Way" to spot the real profitability by product
How many times are you held responsible for decisions that have an impact on the medium-long term, such as investments in asset-related resources, pricing, inventory valuations, the choice of the best customers on which to focus the business efforts, and so on!
In order to make the best decisions, you made use of the traditional profitability analyses that look at each product/service/project/customer and, on the basis of their results, you chose the best actions.
BUT when you analyse the actual results, you see that they are different from what you estimated.
Why?
One of the possible explanations on the cost side is that you didn't consider the very nature of your business's indirect activities and the way they consume resources and, as a result, generate costs.
In fact, when those activities differ widely according to the product/service/project/customer, or whatever the cost object, you cannot always allocate them on the basis of some sort of volume measure (mach. hrs, direct labour hrs, output units...), because the factor that drives their costs (the cost driver) is of another kind.
In other terms, in some cases, if you persist in using volume-based costing methods, the profitability ranking of your cost objects, on which you base your decisions, is WRONG!
So, what better way to show this concept than numerical examples?
Let's imagine a manufacturing firm that makes 3 products and uses Machine Hours as the driver to allocate the Factory Overheads to the products.
This firm has an Engineering dpt that works in continuous interaction with the Manufacturing one and for this reason has been treated as an industrial responsibility center.
The same setting has been chosen for the Part Purchasing dpt, also as a result of the JIT (Just In Time) policy.
When it is time to budget the Industrial Profit, the Controlling Manager takes the following steps on the basis of these inputs and calculations.
Table 1 - Input Data
| | Product A | Product B | Product C |
|---|---|---|---|
| Output Units | 10,000 | 30,000 | 20,000 |
| Price $ | 500 | 350 | 440 |
| Direct Resources $: | | | |
| - Labour | 200 | 150 | 180 |
| - Materials | 100 | 70 | 100 |
| Machine Hours per Unit | 6 | 4 | 5 |
| Total Machine Hours | 60,000 | 120,000 | 100,000 |
VOLUME-BASED COSTING OVERHEAD RATE
Total Budgeted Factory Overheads: $4,750,000
Overhead Rate: 4,750,000 / Total Business Machine Hours = 4,750,000 / 280,000 = $16.964 per MH
After multiplying the overhead rate by the Machine Hours per product as per table 1 and dividing the results by the Output Units by product, we have the following:
Table 2 - Factory Overheads per product as per Volume-Based Costing
| | Product A | Product B | Product C |
|---|---|---|---|
| Output Units | 10,000 | 30,000 | 20,000 |
| Total Machine Hours | 60,000 | 120,000 | 100,000 |
| Overheads | 1,017,857 | 2,035,714 | 1,696,429 |
| Overheads per Unit | 101.79 | 67.86 | 84.82 |
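The volume-based figures above can be verified with a few lines of Python (data from table 1; this is just a check of the arithmetic, not part of the Controlling Manager's worksheet):

```python
# Single plant-wide rate: total budgeted overheads / total machine hours.
overheads = 4_750_000
machine_hours = {"A": 60_000, "B": 120_000, "C": 100_000}   # table 1
units = {"A": 10_000, "B": 30_000, "C": 20_000}

rate = overheads / sum(machine_hours.values())              # ~ 16.964 $/MH
per_unit = {p: rate * machine_hours[p] / units[p] for p in units}

print(round(rate, 3))                                       # -> 16.964
print({p: round(v, 2) for p, v in per_unit.items()})
# -> {'A': 101.79, 'B': 67.86, 'C': 84.82}
```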
At this point we can draw up an Industrial Profit Statement that yields this profitability order:
- PRODUCT A
- PRODUCT B
- PRODUCT C
Table 3 - Industrial Profit by product unit according to the Volume-Based Costing
| | Product A | Product B | Product C |
|---|---|---|---|
| Unit Price | 500 | 350 | 440 |
| Unit Manufacturing Cost: | | | |
| - Labour | 200 | 150 | 180 |
| - Materials | 100 | 70 | 100 |
| - Factory Overheads | 101.79 | 67.86 | 84.82 |
| Total Cost per Unit | 401.79 | 287.86 | 364.82 |
| Industrial Profit | 98.21 | 62.14 | 75.18 |
On the other side, the Controlling Manager, aware that in the past reference period the results were very different from the forecast, decides to base his calculations on the Activity approach, and his preference, for a start, goes to the traditional model, that is, the one not linked to the customer-driven value model (discussed on the Shop page of this website).
ACTIVITY-BASED COSTING SYSTEM
With the help of the technical staff, he is able to understand that not all the activities change their costs when the previously chosen volume measure, the number of machine hours, changes.
To be more precise, he realizes that some of them change their costs (linked to the resource consumption they measure) when a group (batch) of products changes, not with each unit.
For instance, the setup activities are carried out without any direct proportion to the number of output units or machine hours to be worked and, as a result, the factor that affects their costs is the number of setups.
These kinds of activities are called Group-Level or Batch-Level Activities.
He also becomes aware that the Part Purchasing activity is linked to the number of models per product, not to any of the volume measures, and that its cost driver is the number of models.
In other terms, the overhead rates for these kinds of activities must be calculated with reference to the number of setups and models and then allocated to the different products according to their respective numbers (table 6).
There are also other activities that concern the whole business and are related just to its "existence", called Facility-level activities, which shouldn't be allocated to the products.
Here are the most important steps.
Table 4 - Budgeted overheads by Activity, Activity Cost Drivers
| | Costs $ | Activity Cost Driver | Driver: Prod. A | Driver: Prod. B | Driver: Prod. C | Total Drivers |
|---|---|---|---|---|---|---|
| Engineering | 450,000 | Engin. hrs | 10,000 | 15,000 | 11,000 | 36,000 |
| Part Purchasing | 500,000 | Num. of models | 6 | 5 | 4 | 15 |
| Setups | 800,000 | Num. of setups | 400 | 120 | 130 | 650 |
| Manufacturing | 3,000,000 | Mach. hrs | 60,000 | 120,000 | 100,000 | 280,000 |
| Total | 4,750,000 | | | | | |
Table 5 - Calculation Overhead rate by Activity
| | Costs $ | Total Activity Cost Drivers | Overhead Rate by Activity |
|---|---|---|---|
| Engineering | 450,000 | 36,000 | 12.50 |
| Part Purchasing | 500,000 | 15 | 33,333.33 |
| Setups | 800,000 | 650 | 1,230.77 |
| Manufacturing | 3,000,000 | 280,000 | 10.7143 |
| Total | 4,750,000 | | |
After multiplying the activity overhead rates of table 5 by the activity cost driver levels per product of table 4, we have the following:
Table 6 - Overheads by Product according to ABC
| | Overhead Rate by Activity | Product A | Product B | Product C |
|---|---|---|---|---|
| Engineering | 12.50 | 125,000 | 187,500 | 137,500 |
| Part Purchasing | 33,333.33 | 200,000 | 166,667 | 133,333 |
| Setups | 1,230.77 | 492,308 | 147,692 | 160,000 |
| Manufacturing | 10.714 | 642,856 | 1,285,714 | 1,071,430 |
| Total Overheads per Product | | 1,460,164 | 1,787,573 | 1,502,263 |
| Output Units | | 10,000 | 30,000 | 20,000 |
| Eng. Costs per Unit | | 12.50 | 6.25 | 6.88 |
| Part Purch. Costs per Unit | | 20.00 | 5.56 | 6.67 |
| Setup Costs per Unit | | 49.23 | 4.92 | 8.00 |
| Manuf. Costs per Unit | | 64.29 | 42.86 | 53.57 |
| Overheads per Unit | | 146.02 | 59.59 | 75.12 |
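The ABC figures of tables 5 and 6 follow mechanically from table 4; a short Python sketch (mine, for verification only) makes the two steps explicit, one rate per activity and then the per-unit overheads by product:

```python
# ABC: (1) overhead rate = activity cost / total driver volume,
#      (2) product overhead = sum over activities of rate * driver level.
activities = {                      # cost, driver levels by product (table 4)
    "Engineering":     (450_000,   {"A": 10_000, "B": 15_000,  "C": 11_000}),
    "Part Purchasing": (500_000,   {"A": 6,      "B": 5,       "C": 4}),
    "Setups":          (800_000,   {"A": 400,    "B": 120,     "C": 130}),
    "Manufacturing":   (3_000_000, {"A": 60_000, "B": 120_000, "C": 100_000}),
}
units = {"A": 10_000, "B": 30_000, "C": 20_000}

per_unit = {p: 0.0 for p in units}
for cost, drivers in activities.values():
    rate = cost / sum(drivers.values())          # table 5 rate
    for p, level in drivers.items():
        per_unit[p] += rate * level / units[p]   # table 6 per-unit share

print({p: round(v, 2) for p, v in per_unit.items()})
# -> {'A': 146.02, 'B': 59.59, 'C': 75.11}   (table 6 shows 75.12 for C
#    because each per-unit line is rounded before being summed)
```

A useful sanity check is that the per-unit overheads, multiplied back by the output units, return the whole $4,750,000 of budgeted overheads.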
Before making a comparison between Volume-Based and Activity-Based full costing, I find it useful to point out two aspects of ABC, to make this discussion clearer:
1) The costs of the activities are calculated by allocating the resource costs using the most suitable drivers, which express the demand for the resources from the activities.
This step has been omitted to keep the article fast and clear, and because describing all the steps of ABC is not the purpose of this article, whose goal is to highlight the strategic aspects.
2) The "inventors" of ABC assign the resource costs directly to the activities through a cross-departmental approach, by identifying some activity pools, each of them having the same cost driver for the activities included, whose costs are then allocated to the cost objects.
That is different from Volume-Based costing, whose first step of the overhead allocation passes through each costing center.
In my opinion, the mapping and evaluation of the activities per costing center is in any case important for holding the respective manager accountable for the center's efficiency and for a fairer allocation of the indirect costs, which should provide a better basis for manager performance evaluation.
Now we can proceed to the comparison between the costing methods.
Table 7 – Comparison between the two Industrial Profits by product unit
| | Volume-Based: A | Volume-Based: B | Volume-Based: C | ABC: A | ABC: B | ABC: C |
|---|---|---|---|---|---|---|
| Price | 500 | 350 | 440 | 500 | 350 | 440 |
| Unit Manufacturing Cost: | | | | | | |
| - Labour | 200 | 150 | 180 | 200 | 150 | 180 |
| - Materials | 100 | 70 | 100 | 100 | 70 | 100 |
| - Factory Overheads | 101.79 | 67.86 | 84.82 | 146.02 | 59.59 | 75.12 |
| Total Cost per Unit | 401.79 | 287.86 | 364.82 | 446.02 | 279.59 | 355.12 |
| Industrial Profit | 98.21 | 62.14 | 75.18 | 53.98 | 70.41 | 84.88 |
At this point we see that the profitability order has been nearly inverted by the ABC evaluation:
- PRODUCT C
- PRODUCT B
- PRODUCT A
WHY?
Because of the change in criteria: the causality principle is applied, on the basis of which the real causes of resource absorption are considered when attributing the overheads to the product families via the activities.
With the Volume-Based method, the overheads are allocated through a single cost driver, machine hours in this case, and this brings about so-called cross-subsidization, that is, the product that records the higher level of that volume measure is charged with the heaviest burden of indirect costs.
In other terms, unlike with ABC, no realistic difference in the consumption of resources by the activities, and then of activities by the products, is considered.
What are the implications of the use of the Volume-Based Costing when the activities vary largely according to the product?
Here are some examples:
- Investment decisions in products that, under Volume-Based costing, are erroneously considered the most profitable, or profitable to an extent that is not true at all.
- Erroneous pricing, which is vital because you may set a price that does not assure a margin large enough to cover, as intended, all the estimated costs.
In both cases, you achieve a smaller area of profit for the firm.
- Erroneous inventory valuation.
- Another internal consequence is the behaviour of some managers who aren't very motivated to control some kinds of costs, because they are held responsible just on the basis of a volume measure, and the result is an increase in the whole business's overheads.
Just as an example, a dpt manager could require several maintenance interventions without caring about the respective costs charged to him, because in any case the allocation is made on the basis of machine hours or labour hours, over which he has little influence.
On the other side, some other dpt managers requiring very few maintenance interventions could be charged with a similar amount of those costs!
Are there ways to spot a misleading overhead allocation prior to implementing ABC?
Yes, there are some methods based on a deep knowledge of the internal and external activities and, in particular, on verifying the presence and extent of specific categories of business activities that pave the way for the ABC application.
Which are those activity categories?
If you are interested in knowing more about it, see the Contacts page on www.thestrategiccontroller.com.
One of the next publications on this website is due to be “The Way to spot the real profitability by customer”.
34. Risk Management: some important metrics
Risk is the main factor taken into account by a skilled CFO and Project Manager in the fulfilment of their tasks, both when planning the future business activities/projects and during their execution.
This article focuses on some cases of risk evaluation and on the indices used for project control and management.
The main quantitative technique used for deciding whether to undertake a project, also in comparison with alternative projects, is without any doubt based on Bayes' Theorem.
It assigns to every project/investment decision the weighted EMV (Expected Monetary Value), that is, the sum of the expected monetary values of each scenario/risky event (each estimated profit/loss multiplied by its respective probability of occurring).
If you want to see this concept in more detail, you can look at article n. 18 of this website under the paragraph "Scenario Analysis".
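As a quick illustration of the weighted EMV, here is a minimal sketch; the scenarios and figures below are invented for this example, not taken from article n. 18:

```python
# Weighted EMV: each scenario's estimated profit/loss times its probability, summed.
scenarios = [                  # (monetary outcome $, probability) - illustrative data
    (+2_000_000, 0.50),        # base case
    (+3_500_000, 0.20),        # favourable scenario
    (-1_500_000, 0.30),        # risky event occurs
]
assert abs(sum(p for _, p in scenarios) - 1.0) < 1e-9   # probabilities must sum to 1

emv = sum(outcome * p for outcome, p in scenarios)
print(round(emv))              # -> 1250000
```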
However, some other quantitative "factors" can be taken into account, and not only for the decision of whether to get a project going.
Why am I writing about quantitative approaches?
I am doing that because there are other methods, called qualitative, that analyse the projects and the related risks without reference to any kind of metric; they are easier to understand but at the same time carry a strong degree of subjectivity.
In this article I will focus on the quantitative approaches that take into account the financial side, and you will see both how these metrics are calculated and how they can be used.
Let's start with the case in which you want to know the exposure of the project/activity to risk, meant as the maximum effort to be faced if the risky event occurs.
Obviously, there isn't a one-size-fits-all value for all projects or industries. A reference term must be identified against which the maximum expected monetary value can be compared.
This term is the BAC (Budget at Completion), that is, the sum of all the project costs estimated at the initial stage of its life.
After this foreword, here is the Risk Exposure Level (REL):
REL = MAX EMV (Expected Monetary Value) / BAC
Let's suppose the maximum loss estimated for the project is the increase in procurement costs following the levy of customs duties on imported raw materials that are fundamental to the execution of the project.
This loss is estimated at $3,500,000 and the total value of the project costs is $10,000,000:
REL = 3,500,000 / 10,000,000 = 0.35
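In code the REL is little more than a ratio; the tiny sketch below just packages it with the figures of the example (here MAX EMV is taken as the largest expected loss, as in the text):

```python
# Risk Exposure Level: maximum expected monetary value of the loss
# over the Budget at Completion (BAC).
def rel(max_emv: float, bac: float) -> float:
    return max_emv / bac

print(rel(3_500_000, 10_000_000))   # -> 0.35
```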
This metric, compared with that of other projects, could give a preference order with reference to the degree of risk if you are at a proposal stage.
Of course, in the project selection process there are other factors complementary to the risk preference order based on the REL.
I want to list just some of these concurrent strategic elements, without going into further detail here:
- The financial standing of the firm and related funding matters, important for facing the higher expenses following the potential realization of the risky event.
- Timing of the receipts from the customers.
- The kind of project pricing, whether lump sum or mark-up.
- Inclination to the risk of the top management.
- Compensation criteria (the higher the variable share linked to the project results, the higher the number of low-risk projects undertaken).
What if a given project has a high REL but its acceptance is fundamental to the strategy of the company, and putting counteractions into practice is too expensive?
The project manager can try to transfer the risk carrying the MAX EMV to another entity, for instance through an insurance contract or by subcontracting the risk-related activity to third parties.
This practice is very advisable when the risky event isn’t under the control of the project manager, such as macroeconomic factors (the inflation trend is an example), but also when the event could be countered by the person responsible for the project and yet all the related financial assessments are negative.
The REL can also be used in the monitoring phase, that is when the project is being executed and the risk must be kept under control on the basis of new information.
Moreover, the project manager, with the help of the project controller, can measure the effectiveness of the actions put in place to reduce the degree of risk by recalculating the REL for the ongoing projects.
Nonetheless, one thing needs to be specified.
In this latter use, the effectiveness of the corrective actions should be measured by taking into account the same risk cluster considered at the initial stage.
In fact, over the life of the project other risks not considered previously can come up; so, to see only whether the project manager is doing well, the MAX EMV must be calculated just for the old risky events.
If the ratio is lower than the previous one, the actions carried out are yielding positive effects.
It goes without saying that all the risk events, old and new, must be taken into account for a general view of the future of the project.
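The monitoring use of the REL can be illustrated as follows; all risk names and EMV figures below are hypothetical, chosen only to show the old-cluster versus overall comparison.

```python
# REL in the monitoring phase: recalculate it for the ongoing project.
# Risk names and EMV figures are hypothetical illustrations.

def rel(emvs, bac):
    """Risk Exposure Level: the maximum EMV in the cluster over the BAC."""
    return max(emvs) / bac

bac = 10_000_000  # Budget at Completion

# EMVs estimated for the risk cluster at the initial stage
initial_cluster = {"customs_duties": 3_500_000, "supplier_delay": 1_200_000}

# The same ("old") risks re-estimated after the corrective actions
old_cluster_now = {"customs_duties": 2_000_000, "supplier_delay": 900_000}

# A risk that emerged only during execution
new_risks = {"key_staff_turnover": 2_500_000}

rel_initial = rel(initial_cluster.values(), bac)      # 0.35
# Effectiveness check: same risk cluster as at the initial stage
rel_old_cluster = rel(old_cluster_now.values(), bac)  # 0.20 < 0.35: actions are working
# Overall view of the project's future: old and new risks together
rel_overall = rel({**old_cluster_now, **new_risks}.values(), bac)  # 0.25
```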
What about the Contingency Funds?
When the risky events are identified by the people responsible for the project and the activities it is broken down into, evaluated, and kept “inside” the project (that is, not transferred to other entities), the following step is setting aside appropriate funds intended for the necessary counteractions.
They are called Contingency Funds and must be included in the well-known Risk Plan.
It can happen at a given Check in Progress that the risk for which a Contingency Fund was set vanishes, that is, the probability of the related event occurring no longer exists.
In that case, that fund can be used either for setting up new funds for unforeseen future risks (project allowances), or for increasing the contingencies against other risks previously considered, or for facing the expenses incurred for risks already realized and not foreseen during the initial stage of the project.
Another potential use of that fund could be increasing the ratio that measures the expected project profitability, that is Project Revenues to Project Costs.
This ratio is called the K coefficient, and it increases when the Contingency Funds (costs) are diminished.
K coefficient of the project = Project Revenues/Project Costs
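A small numerical sketch of the effect just described; the revenue, cost, and fund figures are hypothetical, chosen only to show how releasing a Contingency Fund raises K.

```python
# K coefficient: expected project profitability, Revenues / Costs.
# All figures are hypothetical; the contingency fund sits inside project costs.

def k_coefficient(revenues: float, costs: float) -> float:
    return revenues / costs

revenues = 12_000_000
contingency_fund = 500_000
costs_with_fund = 10_000_000  # total project costs, fund included

k_before = k_coefficient(revenues, costs_with_fund)                    # 1.2
# Releasing the fund (the risk vanished) lowers costs and raises K
k_after = k_coefficient(revenues, costs_with_fund - contingency_fund)  # ≈ 1.263
```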
A prudent project manager usually prefers the solutions listed before.
After describing the Contingency Funds and their potential uses, the following step is showing the risk metrics that refer to them.
Risk Tendency Level (RTL), that is the ratio between the total of the Contingency Funds available and the Budget at Completion (BAC).
RTL = CFtot/BAC
It shows how much the company is willing to spend because of risk in comparison with the total costs of the project.
This metric goes a long way toward explaining the attention threshold that the company sets on a specific project, together with the REL and the RAL.
The latter (Risk Acceptance Level) results from the total of the Contingency Funds available divided by the total of the EMV for a project.
RAL = CFtot/EMVtot
It expresses how much of the estimated risks of the project the company accepts to take on its own charge.
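The two ratios can be computed directly from their definitions; CFtot and EMVtot below are hypothetical figures used only for illustration.

```python
# RTL and RAL from the definitions above; CFtot and EMVtot are hypothetical.

def rtl(cf_total: float, bac: float) -> float:
    """Risk Tendency Level: total Contingency Funds over Budget at Completion."""
    return cf_total / bac

def ral(cf_total: float, emv_total: float) -> float:
    """Risk Acceptance Level: total Contingency Funds over total EMV."""
    return cf_total / emv_total

cf_total = 1_500_000    # total Contingency Funds set aside
bac = 10_000_000        # Budget at Completion
emv_total = 4_700_000   # sum of the EMVs of all identified risks

rtl_value = rtl(cf_total, bac)        # 0.15
ral_value = ral(cf_total, emv_total)  # ≈ 0.319
```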
33. Another useful "piece" of variance analysis for the BU
In the previous article no. 9 on this website we showed how the Market Share Variance is calculated and its potential use, also for performance evaluation of the Sales dpt, in particular when dealing with products/services going through their maturity stage.
Now we are going to show another very useful piece of the revenue (contribution margin) variance analysis for the BU.
Summing up, the Sales Variance is broken down into Price Variance and Volume Variance.
The latter is further broken down into Mix Variance and Quantity Variance.
At the last level of the analysis of the revenue deviation (we proceed the same way for the Contribution Margin Variance), the Quantity Variance is divided into Market Share Variance and Market Size Variance.
One consideration must be made: fundamental to this calculation is the inclusion of all the products of the company that sell in the same market, forming a category of products.
For instance, if we manufacture three models of smartphones, the reference numbers should be the total of all three products or the average value, depending upon what we are putting into the related “formula”.
In so doing, we’ll see that the Market Size Variance plus the Market Share Variance equals the total of the single Quantity Variances of the three models.
To be clearer, here is a table with the reference data:
Table 1

| Description | Actual Unit Sales | Budgeted Unit Sales | Budget Market Share | Budgeted Revenues |
|---|---|---|---|---|
| Market | 30,000 | 35,000 | | |
| Smartphone A | 3,000 | 3,500 | 10% | 650,000 |
| Smartphone B | 2,000 | 2,200 | 6.3% | 480,000 |
| Smartphone C | 1,500 | 1,800 | 5.1% | 400,000 |
| Total/Average Price | | | 21.4% | 204 |
Here is the magical equation:
Market Size Variance = (Actual Mkt Size in units – Budgeted Mkt Size in units) X Budget Market Share X Budget Average Price
-$218,280 = (30,000 – 35,000) X 21.4% X $204
In this example the variance of the revenues traceable to the variation of the Market Size in comparison with its expected value is Unfavourable, since the actual size is lower than the budgeted one.
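The calculation can be reproduced from the data in Table 1. Note that the article rounds the budget market share to 21.4% before multiplying, so the sketch below does the same in order to reproduce its figure.

```python
# Market Size Variance from the data in Table 1.
# The budget market share is rounded to 21.4%, as in the article's equation.

actual_market = 30_000
budgeted_market = 35_000
budgeted_units = {"A": 3_500, "B": 2_200, "C": 1_800}
budgeted_revenues = {"A": 650_000, "B": 480_000, "C": 400_000}

budget_market_share = round(sum(budgeted_units.values()) / budgeted_market, 3)     # 0.214
budget_avg_price = sum(budgeted_revenues.values()) / sum(budgeted_units.values())  # 204.0

market_size_variance = (actual_market - budgeted_market) * budget_market_share * budget_avg_price
# Negative, i.e. Unfavourable: the actual market is smaller than budgeted
print(round(market_size_variance))  # -218280
```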
How can this value be used?
As said above, the Market Share Variance, once calculated, could also be used as a metric to assess the performance of the Sales dpt for products that are in the maturity stage of their life cycle.
In fact, the value of the revenue variance traced to the variation of the Market Share might reflect the ability of our salesmen to sell our products compared to that of the competitors, once that product family has been selling for a long time.
As to the Market Size Variance, it could serve as a tool for quantifying the trend of the market, since it measures the revenue variance due to the change in the appreciation of the reference customers towards the category our products are part of.
This figure could be the basis (or not, depending on whether the analysis is favourable or unfavourable) for some business decisions concerning that market, regarding both investments in asset-related resources and in variable resources such as the salesmen “fleet”.
It goes without saying that these decisions require that past data analyses like the Market Size Variance be combined with information concerning the future, for instance about new models to be marketed, also by the competitors, which are exclusive tasks of the Marketing Dpt.
In any case, the purpose of this article, like that of the whole website, is to highlight that management control has at its disposal all kinds of instruments suitable both to see whether the strategy is successful and to help define it.