
Forecasting:

Forecasting is studying and estimating the outcome of situations that will happen in the future and whose result is unknown. It is similar to prediction, but forecasting is usually done using scientific methods. Forecasting can be applied to many different things. It is not a very precise science, however, so results are not always reliable.

Introduction

Forecasting is the estimation of the value of a variable (or set of variables) at some future point in time. In this note we will consider some methods for forecasting. A forecasting exercise is usually carried out in order to provide an aid to decision-making and planning for the future. Typically all such exercises work on the premise that if we can predict what the future will be like, we can modify our behaviour now to be in a better position than we otherwise would have been when the future arrives. Applications for forecasting include:

- inventory control/production planning - forecasting the demand for a product enables us to control the stock of raw materials and finished goods, plan the production schedule, etc.
- investment policy - forecasting financial information such as interest rates, exchange rates, share prices, the price of gold, etc. This is an area in which no one has yet developed a reliable (consistently accurate) forecasting technique (or at least if they have they haven't told anybody!).
- economic policy - forecasting economic information such as the growth in the economy, unemployment, the inflation rate, etc. This is vital both to government and business in planning for the future.

Think for a moment: suppose the good fairy appeared before you and told you that, because of your kindness, virtue and chastity (well, it is a fairy tale), they had decided to grant you three forecasts. Which three things in your personal/business life would you most like to forecast? Personally I would choose (in decreasing order of importance): the date of my death; the winning numbers on the next UK national lottery; and the winning numbers on the UK national lottery after that one.

As you can see from my list, some forecasts have life or death consequences. Also it is clear that to make certain forecasts, e.g. the date of my death, we could (in the absence of the good fairy to help us) collect some data to enable a more informed, and hence hopefully more accurate, forecast to be made. For example we might look at life expectancy for middle-aged UK male academics (non-smoker, drinker, never exercises). We might also conduct medical tests. The point to emphasise here is that collecting relevant data may lead to a better forecast. Of course it may not - I could have been run over by a car the day after this was written and hence be dead already. Indeed, on a personal note, I think (nay, forecast) that companies offering Web (digital) immortality will be a big business growth area in the early part of the 21st century.

Types of forecasting problems/methods

One way of classifying forecasting problems is to consider the timescale involved in the forecast, i.e. how far forward into the future we are trying to forecast. Short, medium and long-term are the usual categories, but the actual meaning of each will vary according to the situation that is being studied. For example, in forecasting energy demand in order to construct power stations 5-10 years would be short-term and 50 years would be long-term, whilst in forecasting consumer demand in many business situations up to 6 months would be short-term and over a couple of years long-term. The table below shows the timescale associated with business decisions.
Timescale                            Type of decision   Examples
Short-term (up to 3-6 months)        Operating          Inventory control; production planning, distribution
Medium-term (3-6 months to 2 years)  Tactical           Leasing of plant and equipment; employment changes
Long-term (above 2 years)            Strategic          Research and development; acquisitions and mergers; product changes

The basic reason for the above classification is that different forecasting methods apply in each situation, e.g. a forecasting method that is appropriate for forecasting sales next month (a short-term forecast) would probably be an inappropriate method for forecasting sales in five years' time (a long-term forecast). In particular note here that the use of numbers (data) to which quantitative techniques are applied typically varies from very high for short-term forecasting to very low for long-term forecasting when we are dealing with business situations. Forecasting methods can be classified into several different categories:

- qualitative methods - where there is no formal mathematical model, often because the data available is not thought to be representative of the future (long-term forecasting)
- regression methods - an extension of linear regression where a variable is thought to be linearly related to a number of other independent variables
- multiple equation methods - where there are a number of dependent variables that interact with each other through a series of equations (as in economic models)
- time series methods - where we have a single variable that changes with time and whose future values are related in some way to its past values.

We shall consider each of these methods in turn.

Qualitative methods

Methods of this type are primarily used in situations where there is judged to be no relevant past data (numbers) on which a forecast can be based, and they typically concern long-term forecasting. One approach of this kind is the Delphi technique. The ancient Greeks had a very logical approach to forecasting and thought that the best people to ask about the future were supernatural beings, gods. At the oracle at Delphi in ancient Greece, questions to the gods were answered through the medium of a woman over fifty who lived apart from her husband and dressed in a maiden's clothes. If you wanted your question answered you had to:

provide some cake; provide an animal for sacrifice; and bathe with the medium in a spring.

After this the medium would sit on a tripod in a basement room in the temple, chew laurel leaves and answer your question (often in ambiguous verse). It is therefore legitimate to ask whether, in the depths of a basement room somewhere, there is a laurel leaf chewing government servant who is employed to forecast economic growth, election success, etc. Perhaps there is! Reflect for a moment: do you believe that making forecasts in the manner used at Delphi leads to accurate forecasts or not?

Recent scientific investigation (New Scientist, 1st September 2001) indicates that the medium may have been "high" as a result of inhaling hydrocarbon fumes, specifically ethylene, emanating from a geological fault underneath the temple.

Nowadays the Delphi technique has a different meaning. It involves asking a body of experts to arrive at a consensus opinion as to what the future holds. Underlying the idea of using experts is the belief that their view of the future will be better than that of non-experts (such as people chosen at random in the street). Consider: what types of experts would you choose if you were trying to forecast what the world will be like in 50 years' time?

In a Delphi study the experts are all consulted separately, to avoid some of the bias that might result were they all brought together, e.g. domination by a strong-willed individual, or divergent (but valid) views not being expressed for fear of humiliation. A typical question might be "In what year (if ever) do you expect automated rapid transit to have become common in major cities in Europe?". The answers are assembled in the form of a distribution of years, with comments attached, and recirculated to provide revised estimates. This process is repeated until a consensus view emerges. Plainly such a method has many deficiencies, but on the other hand is there a better way of getting a view of the future if we lack the relevant data (numbers) which would be needed were we to apply some of the more quantitative techniques?

As an example of this, there was a Delphi study published in Science Journal in October 1967 which tried to look forward into the future (now, of course, we are many years past 1967, so we can see how well the participants forecast). Many questions were asked as to when something might happen, and a selection of these questions is given below. For each question we give the upper quartile answer, the time by which 75% of the experts believed something would have happened.

- Automated rapid transit, upper quartile answer 1985, i.e. 75% of the experts asked in 1967 thought that by 1985 there would be widespread automated rapid transit in most urban areas - tell that to anyone who lives in London!
- Widespread use of sophisticated teaching machines, upper quartile answer 1990 - tell that to anyone who works in a UK school/university.
- Widespread use of robot services, upper quartile answer 1995.

It is clear that these forecasts, at least, were very inaccurate. Indeed, looking over the full set of forecasts, many of the 25 forecasts made (about all aspects of life/society in the future after 1967) were wildly inaccurate. This brings us to our first key point: we are interested in the difference between the original forecast and the final outcome, i.e. in forecast error. However, back in 1967 when this Delphi study was done, what other alternative approach did we have if we wished to answer these questions? In many respects the issue we need to address with regard to forecasting is not whether a particular method gives good (accurate) forecasts, but whether it is the best available method - if it is, then what choice do we have about using it? This brings us to our second key point: we need to use the most appropriate (best) forecasting method, even if we know that (historically) it does not give accurate forecasts.

Regression methods

You have probably already met linear regression, where a straight line of the form Y = a + bX is fitted to data. It is possible to extend the method to deal with more than one independent variable X. Suppose we have k independent variables X1, X2, ..., Xk; then we can fit the regression line

Y = a + b1X1 + b2X2 + ... + bkXk

This extension of the basic linear regression technique is known as multiple regression. Plainly, knowing the regression line enables us to forecast Y given values for the Xi, i = 1, 2, ..., k.
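To make the mechanics concrete, here is a minimal sketch in Python of fitting such a regression line by least squares. The data values and variable names are invented purely for illustration; any statistics package (including the package used in these notes) would do the same job.

```python
import numpy as np

# Hypothetical observations of Y together with k = 2 independent variables.
X1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
X2 = np.array([2.0, 1.5, 3.5, 3.0, 4.5])
Y = np.array([5.1, 6.8, 10.2, 11.1, 14.0])

# Design matrix [1, X1, X2]; least squares gives the coefficients a, b1, b2.
A = np.column_stack([np.ones_like(X1), X1, X2])
(a, b1, b2), *_ = np.linalg.lstsq(A, Y, rcond=None)

# Knowing the regression line enables us to forecast Y for given Xi values.
y_hat = a + b1 * 6.0 + b2 * 5.0
print(f"Y = {a:.2f} + {b1:.2f}X1 + {b2:.2f}X2; forecast Y = {y_hat:.2f}")
```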
Multiple equation methods

Methods of this type are frequently used in economic modelling (econometrics), where there are many dependent variables that interact with each other via a series of equations, the form of which is given by economic theory. This is an important point: economic theory gives us some insight into the basic structural relationships between variables, but the precise numeric relationship between variables must often be deduced by examining data. As an example consider the following simple model, where:

X = personal income
Y = personal spending
I = personal investment
r = interest rate

From economic theory suppose that we have

Y = a1 + b1(X - a1)   (spending a linear function of disposable income)
I = a2 + b2r          (investment linearly related to the interest rate)

and the balancing equation

X = Y + I             (income = spending + investment)

where a1, a2, b1, b2 are constants. Here we have 3 equations in 4 variables (X, Y, I, r), and so to solve these equations one of the variables must be given a value. The variable so chosen is known as an exogenous variable, because its value is determined outside the system of equations, whilst the remaining variables are called endogenous variables, as their values are determined within the system of equations. For example, in our model we might regard the interest rate r as the exogenous variable and be interested in how X, Y and I change as we alter r. Usually the constants a1, a2, b1, b2 are not known exactly and must be estimated from data (a complex procedure). Note too that these constants will probably be different for different groups of people, e.g. urban/rural, men/women, single/married, etc.
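For a given value of the exogenous variable r, the three equations above are linear in X, Y and I, so they can be solved directly. The sketch below, in Python, does this for a few interest rates; the constants a1, b1, a2, b2 are invented placeholders, not estimated values.

```python
import numpy as np

def solve_economy(r, a1=50.0, b1=0.8, a2=100.0, b2=-20.0):
    """Solve Y = a1 + b1(X - a1), I = a2 + b2*r, X = Y + I for a given r."""
    # Rows: Y - b1*X = a1(1 - b1);  I = a2 + b2*r;  X - Y - I = 0.
    A = np.array([[-b1, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [1.0, -1.0, -1.0]])
    rhs = np.array([a1 * (1.0 - b1), a2 + b2 * r, 0.0])
    return np.linalg.solve(A, rhs)  # returns X, Y, I

for r in (0.03, 0.05, 0.10):
    X, Y, I = solve_economy(r)
    print(f"r = {r:.2f}: X = {X:.1f}, Y = {Y:.1f}, I = {I:.1f}")
```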

An example of an econometric model of this type is the UK Treasury model of the economy, which contains many variables (each with a time subscript) and complicated equations, and is used to look at the effect of interest rate changes, tax changes, oil price movements, etc. For example, the UK Treasury equation [New Scientist, 31st October 1993] to predict consumer spending looks like:

ΔlogeCt = -0.018 + 0.0623ΔΔlogeUt - 0.00448logeCt-1 + 0.004256logeYt-1 - 0.0014336loge[(NFWt-1 + GPWt-1)/(Pt-1Yt-1)] + etc

where:

t = time period (quarter) in question
Δ = change in variable between this quarter and last quarter
C = consumer non-durable spending for the quarter in question
U = unemployment rate
Y = real disposable income adjusted for inflation loss on financial assets
P = inflation index for total consumer spending
NFW = net financial assets of the personal sector
GPW = gross physical wealth of the personal sector

Historically, econometric techniques/methods tend to have large forecast errors when forecasting national economies in the medium-term. However, recall one of our key points above: we need to use the most appropriate (best) forecasting method, even if we know that (historically) it does not give accurate forecasts. It can be argued that such techniques are the most appropriate/best way of making economic forecasts.

Time series methods/analysis

Methods of this type are concerned with a variable that changes with time and which can be said to depend only upon the current time and the previous values that it took (i.e. not dependent on any other variables or external factors). If Yt is the value of the variable at time t then the equation for Yt is

Yt = f(Yt-1, Yt-2, ..., Y0, t)

i.e. the value of the variable at time t is purely some function of its previous values and time; no other variables/factors are of relevance. The purpose of time series analysis is to discover the nature of the function f and hence allow us to forecast values for Yt. Time series methods are especially good for short-term forecasting where, within reason, the past behaviour of a particular variable is a good indicator of its future behaviour, at least in the short-term. The typical example here is short-term demand forecasting. Note the difference between demand and sales: demand is what customers want, sales is what we sell, and the two may be different. In graphical terms the plot of Yt against t is as shown below.

The purpose of the analysis is to discern some relationship between the Yt values observed so far in order to enable us to forecast future Yt values. We shall deal with two techniques for time series analysis in detail and briefly mention a more sophisticated method.

Moving average

One very simple method for time series forecasting is to take a moving average (strictly, a simple moving average; the weighted moving average generalises it by giving the observations unequal weights). The moving average (mt) over the last L periods ending in period t is calculated by taking the average of the values for the periods t-L+1, t-L+2, t-L+3, ..., t-1, t, so that

mt = [Yt-L+1 + Yt-L+2 + Yt-L+3 + ... + Yt-1 + Yt]/L

To forecast using the moving average we say that the forecast for all periods beyond t is just mt (although we usually only forecast for one period ahead, updating the moving average as the actual observation for that period becomes available). Consider the following example: the demand for a product for 6 months is shown below; calculate the three month moving average for each month and forecast the demand for month 7.
Month            1   2   3   4   5   6
Demand (100's)   42  41  43  38  35  37

Now we cannot calculate a three month moving average until we have at least 3 observations i.e. it is only possible to calculate such an average from month 3 onward. The moving average for month 3 is given by:

m3 = (42 + 41 + 43)/3 = 42 and the moving average for the other months is given by: m4 = (41 + 43 + 38)/3 = 40.7 m5 = (43 + 38 + 35)/3 = 38.7 m6 = (38 + 35 + 37)/3 = 36.7 We use m6 as the forecast for month 7. Hence the demand forecast for month 7 is 3670 units. The package input for this problem is shown below.

The output from the package for a three month moving average is shown below.
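As a cross-check on the package figures, here is a minimal sketch in Python of the same three month moving average calculation; the function name is ours, not the package's.

```python
def moving_average(demand, L):
    """Return the moving averages m_L, ..., m_n over the last L periods."""
    return [sum(demand[t - L + 1:t + 1]) / L for t in range(L - 1, len(demand))]

demand = [42, 41, 43, 38, 35, 37]   # demand (100's) for months 1-6
m = moving_average(demand, 3)       # [42.0, 40.7, 38.7, 36.7] (to 1 d.p.)
print(f"forecast for month 7: {m[-1]:.1f} hundred units")
```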

Choosing between forecasts One problem with this forecast is simple - how good is it? For example we could also produce a demand forecast for month 7 using a two month moving average. This would give the following: m2 = (42 + 41)/2 = 41.5 m3 = (41 + 43)/2 = 42 m4 = (43 + 38)/2 = 40.5 m5 = (38 + 35)/2 = 36.5 m6 = (35 + 37)/2 = 36 Would this forecast (m6 = 3600 units) be better than our current demand forecast of 3670 units? Rather than attempt to guess which forecast is better we can approach the problem logically. In fact, as will become apparent below, we already have sufficient information to make a logical choice between forecasts if we look at that information appropriately.

In an attempt to decide how good a forecast is we have the following logic. Consider the three month moving average given above and pretend for a moment that we had only demand data for the first three months, then we would calculate the moving average for month 3 (m3) as 42 (see above). This would be our forecast for month 4. But in month 4 the outcome is actually 38, so we have a difference (error) defined by:

error = forecast-outcome = 42-38 = 4

Note here that we could equally well define error as outcome-forecast. That would just change the sign of the errors, not their absolute values. Indeed note here that if you inspect the package output you will see that it does just that. In month 4 we have a forecast for month 5 of m4 = 40.7 but an outcome for month 5 of 35 leading to an error of 40.7-35 = 5.7. In month 5 we have a forecast for month 6 of m5 = 38.7 but an outcome for month 6 of 37 leading to an error of 38.7-37 = 1.7. Hence we can construct the table below:
Month   Demand (100's)   Forecast    Error
1       42
2       41
3       43
4       38               m3 = 42     4
5       35               m4 = 40.7   5.7
6       37               m5 = 38.7   1.7
7       ?                m6 = 36.7   ?

Constructing the same table for the two month moving average we have:
Month   Demand (100's)   Forecast    Error
1       42
2       41
3       43               m2 = 41.5   -1.5
4       38               m3 = 42     4
5       35               m4 = 40.5   5.5
6       37               m5 = 36.5   -0.5
7       ?                m6 = 36     ?

Comparing these two tables we can see that the error terms give us a measure of how good the forecasting methods (two or three month moving average) would have been had we used them to forecast one period (month) ahead on the historical data that we have. In an ideal world we would like a forecasting method for which all the errors are zero, this would give us confidence (probably a lot of confidence) that our forecast for month 7 is likely to be correct. Plainly, in the real world, we are hardly likely to get a situation where all the errors are zero. It is genuinely difficult to look at (as in this case) two series of error terms and compare them. It is much easier if we take some function of the error terms, i.e. reduce each series to a single (easily grasped) number. One suitable function for deciding how accurate a forecasting method has been is:

average squared error

The logic here is that by squaring errors we remove the sign (+ or -) and discriminate against large errors (being resigned to small errors but averse to large errors). Ideally average squared error should be zero (i.e. a perfect forecast). In any event we prefer the forecasting method that gives the lowest average squared error. We have that for the three month moving average:

average squared error = [4² + 5.7² + 1.7²]/3 = 17.13

and for the two month moving average:

average squared error = [(-1.5)² + 4² + 5.5² + (-0.5)²]/4 = 12.19

The lower of these two figures is associated with the two month moving average and so we prefer that forecasting method (and hence prefer the forecast of 3600 for month 7 produced by the two month moving average).
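The comparison can be automated. The sketch below (Python, with illustrative names of our own) computes the one-period-ahead errors and the average squared error for both moving averages; the two month figure of 12.19 matches the text exactly, while the three month figure comes out as 16.96 rather than 17.13 because the text squares the already-rounded errors 5.7 and 1.7.

```python
def one_step_errors(demand, L):
    """error = forecast - outcome, forecasting each month from the
    L-month moving average of the preceding months."""
    return [sum(demand[t - L:t]) / L - demand[t] for t in range(L, len(demand))]

demand = [42, 41, 43, 38, 35, 37]
for L in (2, 3):
    errors = one_step_errors(demand, L)
    mse = sum(e * e for e in errors) / len(errors)
    print(f"{L}-month MA: errors {[round(e, 1) for e in errors]}, "
          f"average squared error = {mse:.2f}")
```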

Average squared error is known technically as the mean squared deviation (MSD) or mean squared error (MSE). Note here that we have actually done more than distinguish between two different forecasts (i.e. between the two month and three month moving averages). We now have a criterion for distinguishing between forecasts, however they are generated: namely, we prefer the forecast generated by the technique with the lowest MSD (historically the most accurate forecasting technique on the data, had we applied it consistently across time). This is important, as even our simple package contains many different methods for time series forecasting, as below.

Question - do you think that one of the above forecasting methods ALWAYS gives better results than the others or not?

Single exponential smoothing

One disadvantage of using moving averages for forecasting is that in calculating the average all the observations are given equal weight (namely 1/L), whereas we would expect the more recent observations to be a better indicator of the future (and accordingly ought to be given greater weight). Also in moving averages we only use recent observations; perhaps we should take into account all previous observations. One technique known as exponential smoothing (or, more accurately, single exponential smoothing) gives greater weight to more recent observations and takes into account all previous observations. Define a constant α where 0 <= α <= 1; then the (single) exponentially smoothed moving average for period t (Mt say) is given by

Mt = αYt + α(1-α)Yt-1 + α(1-α)²Yt-2 + α(1-α)³Yt-3 + ...

So you can see here that the exponentially smoothed moving average takes into account all of the previous observations; compare the moving average above, where only a few of the previous observations were taken into account. The above equation is difficult to use numerically, but note that:

Mt = αYt + (1-α)[αYt-1 + α(1-α)Yt-2 + α(1-α)²Yt-3 + ...]

i.e. Mt = αYt + (1-α)Mt-1

Hence the exponentially smoothed moving average for period t is a linear combination of the current value (Yt) and the previous exponentially smoothed moving average (Mt-1). The constant α is called the smoothing constant, and the value of α reflects the weight given to the current observation (Yt) in calculating the exponentially smoothed moving average Mt for period t (which is the forecast for period t+1). For example if α = 0.2 then this indicates that 20% of the weight in generating forecasts is assigned to the most recent observation and the remaining 80% to previous observations.

Note here that Mt = αYt + (1-α)Mt-1 can also be written Mt = Mt-1 - α(Mt-1 - Yt), or

current forecast = previous forecast - α(error in previous forecast)

so exponential smoothing can be viewed as a forecast continually updated by the forecast error just made.

Consider the following example: for the demand data given in the previous section calculate the exponentially smoothed moving average for values of the smoothing constant α = 0.2 and 0.9.

We have the following for α = 0.2:

M1 = Y1 = 42 (we always start with M1 = Y1)
M2 = 0.2Y2 + 0.8M1 = 0.2(41) + 0.8(42) = 41.80
M3 = 0.2Y3 + 0.8M2 = 0.2(43) + 0.8(41.80) = 42.04
M4 = 0.2Y4 + 0.8M3 = 0.2(38) + 0.8(42.04) = 41.23
M5 = 0.2Y5 + 0.8M4 = 0.2(35) + 0.8(41.23) = 39.98
M6 = 0.2Y6 + 0.8M5 = 0.2(37) + 0.8(39.98) = 39.38

Note here that it is usually sufficient to work to two or three decimal places when doing exponential smoothing. We use M6 as the forecast for month 7, i.e. the forecast for month 7 is 3938 units.

We have the following for α = 0.9:

M1 = Y1 = 42
M2 = 0.9Y2 + 0.1M1 = 0.9(41) + 0.1(42) = 41.10
M3 = 0.9Y3 + 0.1M2 = 0.9(43) + 0.1(41.10) = 42.81
M4 = 0.9Y4 + 0.1M3 = 0.9(38) + 0.1(42.81) = 38.48
M5 = 0.9Y5 + 0.1M4 = 0.9(35) + 0.1(38.48) = 35.35
M6 = 0.9Y6 + 0.1M5 = 0.9(37) + 0.1(35.35) = 36.84

As before M6 is the forecast for month 7, i.e. 3684 units. The package output for α = 0.2 is shown below.

The package output for α = 0.9 is shown below.

In order to decide the best value of α (from the two values of 0.2 and 0.9 considered) we choose the value associated with the lowest MSD (as above for moving averages).

For α = 0.2 we have that

MSD = [(42-41)² + (41.80-43)² + (42.04-38)² + (41.23-35)² + (39.98-37)²]/5 = 13.29

For α = 0.9 we have that

MSD = [(42-41)² + (41.10-43)² + (42.81-38)² + (38.48-35)² + (35.35-37)²]/5 = 8.52

Note here that these MSD values agree (to within rounding errors) with the MSD values given in the package output above. Hence, in this case, α = 0.9 appears to give better forecasts than α = 0.2 as it has a smaller value of MSD.
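A minimal sketch in Python of the smoothing recursion and the MSD comparison is below; small differences from the figures above (e.g. 13.30 versus 13.29) arise only because the hand calculation rounds the Mt values at each step.

```python
def exp_smooth(demand, alpha):
    """Single exponential smoothing: M_t = alpha*Y_t + (1 - alpha)*M_{t-1},
    starting from M_1 = Y_1."""
    M = [demand[0]]
    for y in demand[1:]:
        M.append(alpha * y + (1 - alpha) * M[-1])
    return M

def msd(demand, M):
    """Mean squared deviation of the one-step-ahead errors M_{t-1} - Y_t."""
    errors = [M[t - 1] - demand[t] for t in range(1, len(demand))]
    return sum(e * e for e in errors) / len(errors)

demand = [42, 41, 43, 38, 35, 37]
for alpha in (0.2, 0.9):
    M = exp_smooth(demand, alpha)
    print(f"alpha = {alpha}: month 7 forecast = {M[-1]:.2f}, "
          f"MSD = {msd(demand, M):.2f}")
```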

Above we used MSD to reduce a series of error terms to an easily grasped single number. In fact functions other than MSD, such as:

MAD (mean absolute deviation) = average |error|
bias (mean error) = average error, also known as cumulative forecast error

exist which can also be used to reduce a series of error terms to a single number so as to judge how good a forecast is. For example, as can be seen in the package outputs above, the package gives a number of such functions.

In fact methods are available which enable the optimal value of the smoothing constant α (i.e. the value of α which minimises the chosen criterion of forecast accuracy, such as mean squared deviation (MSD)) to be easily determined. This can be seen below, where the package has calculated that the value of α which minimises MSD is α = 0.86 (approximately).
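We do not know which method the package itself uses, but a simple grid search reproduces the idea, reusing exp_smooth and msd from the sketch above:

```python
demand = [42, 41, 43, 38, 35, 37]
# Try alpha = 0.01, 0.02, ..., 1.00 and keep the value with the lowest MSD.
best_alpha = min((a / 100 for a in range(1, 101)),
                 key=lambda a: msd(demand, exp_smooth(demand, a)))
print(f"alpha minimising MSD is approximately {best_alpha:.2f}")
```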

Note here that the package can be used to plot both the data and the forecasts generated by the method chosen. Below we show this for the output above (associated with the value of α which minimises MSD, 0.86).

Note here that the choice of criterion can have a large effect on the value of α, e.g. for our example the value of α which minimises MAD is α = 0.59 (approximately) and the value of α which minimises bias is α = 1.0 (approximately). To illustrate the change in MAD, bias and MSD as α changes, we graph below MAD and bias against the smoothing constant α,

and below MSD against α.

Below we graph the value of the forecast against α. One particular point to note is that, for this example, the forecast is stable for a relatively wide range of values of α (e.g. for 0.60 <= α <= 1.00 the forecast lies between 36.75 and 37.00). This can be seen below - the curve is "flat" for high α values.

Note here that the above graphs imply that in finding a good value for the smoothing constant α it is not usually necessary to calculate α to a very high degree of accuracy (e.g. not to within 0.001).

More advanced time series forecasting

Time series forecasting methods more advanced than those considered in our simple package do exist. These are based on AutoRegressive Integrated Moving Average (ARIMA) models. Essentially these assume that the time series has been generated by a probability process with future values related to past values, as well as to past forecast errors. To apply ARIMA models the time series needs to be stationary. A stationary time series is one whose statistical properties, such as mean, variance and autocorrelation, are constant over time. If the initial time series is not stationary, it may be that some function of the time series, e.g. taking the differences between successive values, is stationary. In fitting an ARIMA model to time series data, the framework usually used is the Box-Jenkins approach. This does, however, have the disadvantage that, whereas a number of time series techniques are fully automatic (in the sense that the forecaster has to exercise no judgement other than in choosing the technique to use), the Box-Jenkins technique requires the forecaster to make judgements, and consequently its use requires experience and "expert judgement" on the part of the forecaster. Some forecasting packages do exist that make these "expert choices" for you.
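As an illustration only, the statsmodels Python library fits such models; the tiny six-point series from earlier is far too short for ARIMA in practice, so treat this purely as a sketch of the interface.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

demand = np.array([42, 41, 43, 38, 35, 37], dtype=float)

# order=(p, d, q): p autoregressive terms, d differences taken to make the
# series stationary, q moving-average terms linking future values to past
# forecast errors.
fitted = ARIMA(demand, order=(1, 1, 1)).fit()
print(fitted.forecast(steps=1))   # one-step-ahead forecast for month 7
```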

Delphi Forecasting

1. Introduction

Delphi forecasting is a non-quantitative technique for forecasting. It draws its name from the Oracle of Delphi, which in Greek antiquity advised people based on intuition and common sense. Unlike many other methods that use so-called objective predictions involving quantitative analysis, the Delphi method is based on expert opinions. It has been demonstrated that predictions obtained this way can be at least as accurate as other procedures. The essence of the procedure is to use the assessment of opinions and predictions by a number of experts over a number of rounds in carefully managed sequences. One of the most important factors in Delphi forecasting is the selection of experts. The persons invited to participate must be knowledgeable about the issue and represent a variety of backgrounds. The panel must not be so small that the assessment is too narrowly based, nor so large that it is difficult to coordinate. It is widely considered that 10 to 15 experts can provide a good base for the forecast.
2. Procedure

The procedure begins with the planner/researcher preparing a questionnaire about the issue at hand - its character, causes and future shape. These are distributed to the respondents separately, who are asked to rate and respond. The results are then tabulated and the issues raised are identified. The results are then returned to the experts in a second round. They are asked to rank or assess the factors, and to justify why they made their choices. During the third and subsequent rounds their ratings, along with the group averages and lists of comments, are provided, and the experts are asked to re-evaluate the factors. The rounds continue until an agreed level of consensus is reached. The literature suggests that by the third round a sufficient consensus is usually obtained. The procedure may take place in many ways. The first step is usually undertaken by mail. After the initial results are obtained, the subsequent rounds could be undertaken at a meeting of experts, assuming it would be possible to bring them together physically. Or, the subsequent rounds could be conducted again by mail. E-mail has greatly facilitated the procedure. The basic steps are as follows:
1. Identification of the problem. The researcher identifies the problem for which some predictions are required, e.g. what is the traffic of port X likely to be in 10 years' time. The researcher prepares documentation regarding past and present traffic activity. A questionnaire is formulated concerning future traffic estimates and factors that might influence such developments. A level of agreement between the responses is selected, e.g. that 80% of the experts must agree on a particular traffic prediction.

2. Selection of experts. In the case of a port scenario this might include terminal managers, shipping line representatives, land transport company representatives, intermediaries such as freight forwarders, and academics. It is important to have a balance, so that no one group is overly represented.

3. Administration of questionnaire. Experts are provided with background documentation and the questionnaire. Responses are submitted to the researcher within a narrow time frame.

4. Researcher summarizes responses. Actual traffic predictions are tabulated and means and standard deviations calculated for each category of cargo, as in the case of a port traffic prediction exercise. Key factors suggested by experts are compiled and listed.

5. Feedback. The tabulations are returned to the experts, either by mail or in a meeting convened to discuss first round results. The advantage of a meeting is that participants can confront each other to debate areas of disagreement over actual traffic predictions or the key factors identified. The drawback is that a few individuals might exert personal influence over the discussion and thereby sway outcomes, a tendency that the researcher must be alert to and seek to mitigate. Experts are invited to review their original estimates and choices of key factors in light of the results presented, and submit a new round of predictions.

6. These new predictions are tabulated and returned to the experts, either by mail or immediately to the meeting, if the level of agreement does not meet the pre-determined level of acceptance. The specific areas of disagreement are highlighted, and the experts are again requested to consider their predictions in light of the panel's overall views.

7. The process is continued until the level of agreement has reached the pre-determined value. If agreement is not possible after several rounds, the researcher must terminate the process, try to pinpoint where the disagreements occur, and utilize the results to indicate specific problems in the traffic prediction process.

This method could be applied in a classroom setting, with students serving as experts for a particular case study. The traffic at the local airport or port might be an appropriate example. On the basis of careful examination of traffic trends and factors influencing business activity, the class could be consulted to come up with predictions that could then be compared with those of some alternate method such as trend extrapolation.
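Step 4's tabulation and the consensus test are easy to mechanise. The sketch below, in Python, invents both the round-one predictions and the consensus rule (80% of experts within 10% of the group median) purely for illustration; a real study would use whatever level of agreement was pre-selected in step 1.

```python
import statistics

def summarise_round(predictions, agreement=0.8, tolerance=0.1):
    """Tabulate one Delphi round of numeric predictions and test consensus:
    the given fraction of experts must lie within tolerance*median of the
    group median."""
    mean = statistics.mean(predictions)
    stdev = statistics.stdev(predictions)
    median = statistics.median(predictions)
    near = sum(1 for p in predictions if abs(p - median) <= tolerance * median)
    return mean, stdev, near / len(predictions) >= agreement

# Hypothetical first-round port traffic predictions from 10 experts ('000 TEU).
round1 = [820, 900, 760, 1050, 880, 930, 870, 790, 980, 860]
mean, stdev, consensus = summarise_round(round1)
print(f"mean {mean:.0f}, std dev {stdev:.0f}, consensus: {consensus}")
# If consensus is False, feed the tabulation back and run another round.
```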
Market Research

Definition: The process of gathering, analyzing and interpreting information about a market, about a product or service to be offered for sale in that market, and about the past, present and potential customers for the product or service; research into the characteristics, spending habits, location and needs of your business's target market, the industry as a whole, and the particular competitors you face.

Accurate and thorough market research is the foundation of all successful business ventures because it provides a wealth of information about prospective and existing customers, the competition, and the industry in general. It allows business owners to determine the feasibility of a business before committing substantial resources to the venture. Market research provides relevant data to help solve marketing challenges that a business will most likely face - an integral part of the business planning process. In fact, strategies such as market segmentation (identifying specific groups within a market) and product differentiation (creating an identity for a product or service that separates it from those of the competitors) are impossible to develop without market research. Market research involves two types of data:

- Primary information. This is research you compile yourself or hire someone to gather for you.
- Secondary information. This type of research is already compiled and organized for you. Examples of secondary information include reports and studies by government agencies, trade associations or other businesses within your industry. Most of the research you gather will likely be secondary.

When conducting primary research, you can gather two basic types of information: exploratory or specific. Exploratory research is open-ended, helps you define a specific problem, and usually involves detailed, unstructured interviews in which lengthy answers are solicited from a small group of respondents. Specific research, on the other hand, is precise in scope and is used to solve a problem that exploratory research has identified. Interviews are structured and formal in approach. Of the two, specific research is the more expensive.

When conducting primary research using your own resources, first decide how you'll question your targeted group: by direct mail, telephone, or personal interviews. If you choose a direct-mail questionnaire, the following guidelines will increase your response rate:

- Questions that are short and to the point
- A questionnaire that is addressed to specific individuals and is of interest to the respondent
- A questionnaire of no more than two pages
- A professionally-prepared cover letter that adequately explains why you're doing this questionnaire
- A postage-paid, self-addressed envelope to return the questionnaire in (postage-paid envelopes are available from the post office)
- An incentive, such as "10 percent off your next purchase," to complete the questionnaire

Even following these guidelines, mail response is typically low. A return rate of 3 percent is typical; 5 percent is considered very good. Phone surveys are generally the most cost-effective. Here are some telephone survey guidelines:

- Have a script and memorize it--don't read it.
- Confirm the name of the respondent at the beginning of the conversation.
- Avoid pauses because respondent interest can quickly drop.
- Ask if a follow-up call is possible in case you require additional information.

In addition to being cost-effective, speed is another advantage of telephone interviews. A rate of five or six interviews per hour is typical, but experienced interviewers may be able to conduct more. Phone interviews also can cover a wide geographic range relatively inexpensively. Phone costs can be reduced by taking advantage of less expensive rates during certain hours. Personal interviews are one of the most effective forms of marketing research. They can be either of these types:

- A group survey. Used mostly by big business, group interviews or focus groups are useful brainstorming tools for getting information on product ideas, buying preferences, and purchasing decisions among certain populations.
- The in-depth interview. These one-on-one interviews are either focused or nondirective. Focused interviews are based on questions selected ahead of time, while nondirective interviews encourage respondents to address certain topics with minimal questioning.

Secondary research uses outside information assembled by government agencies, industry and trade associations, labor unions, media sources, chambers of commerce, and so on. It's usually published in pamphlets, newsletters, trade publications, magazines, and newspapers. Secondary sources include the following:

- Public sources. These are usually free, often offer a lot of good information, and include government departments, business departments of public libraries, and so on.
- Commercial sources. These are valuable, but usually involve cost factors such as subscription and association fees. Commercial sources include research and trade associations, such as Dun & Bradstreet and Robert Morris & Associates, banks and other financial institutions, and publicly traded corporations.
- Educational institutions. These are frequently overlooked as valuable information sources, even though more research is conducted in colleges, universities, and technical institutes than in virtually any other sector of the business community.

Public Information Sources

Government statistics are among the most plentiful and wide-ranging public sources. Helpful government publications include the following.

The State and Metropolitan Area Data Book provides a wide variety of statistical information on states and metropolitan areas in the United States. Published by the U.S. Census Bureau, it's available online for $31 through the U.S. Government Printing Office and at larger libraries.

The Statistical Abstract of the United States provides tables and graphs of statistics on the social, political and economic conditions in the United States. Published by the Census Bureau, it's available online for $48 through the U.S. Government Printing Office and at larger libraries.

U.S. Industry and Trade Outlook presents recent financial performances of U.S. manufacturers and identifies emerging trends. Published by the Commerce Department in cooperation with McGraw-Hill, it's available online for $76 through the U.S. Government Printing Office and at larger libraries.

The U.S. government online bookstore at the U.S. Government Printing Office has a wealth of publications on topics ranging from agriculture, aviation, and electronics, to insurance, telecommunications, forest management, and workers' compensation.

The U.S. Census Bureau website also contains valuable information relevant to marketing. The Bureau's business publications cover many topics and trades - such as sales volume at furniture stores and payrolls for toy wholesalers - and are useful for small businesses as well as large corporations in retail, wholesale trade, and service industries. Also available are census maps, reports on company statistics regarding different ethnic groups, and reports on county business patterns.

One of the most important information resources you'll find is the SBA. The SBA was created by Congress in 1953 to help American entrepreneurs start, run, and grow successful small enterprises. Today there are SBA offices in every state, the District of Columbia, the U.S. Virgin Islands, Puerto Rico, and Guam. Among the services offered by the SBA are financial assistance, counseling services through Small Business Development Centers (SBDCs), management assistance through programs like SCORE, and low-cost publications. The counselors at SCORE can provide you with free consultation on what type of research you need to gather and where you can obtain that information. They may also be able to suggest other means of gathering the information from primary sources. SBDCs generally have extensive business libraries with lots of secondary sources for you to review.

One of the best public sources is the business section of your public, or local college or university, library. The services provided vary from library to library but usually include a wide range of government publications with market statistics, a large collection of directories with information on domestic and foreign businesses, and a wide selection of magazines, newspapers and newsletters.

Almost every county government publishes population density and distribution figures in accessible census tracts. These show the number of people living in specific areas, such as precincts, water districts or even ten-block neighborhoods. Some counties publish reports that show the population ten years ago, five years ago, and currently, thus indicating population trends.
Other public information resources include local chambers of commerce and their business development departments, which encourage new businesses to locate in their communities. They will supply you (usually for free) with information on population trends, community income characteristics, payrolls, industrial development and so on. Don't overlook your bank as a resource, either. Bankers have a wealth of information at their fingertips and are eager to help their small business customers get ahead. All you have to do is ask.

Commercial Information Sources

Among the best commercial sources of information are research and trade associations. Information gathered by trade associations is usually limited to that particular industry and available only to association members, who have typically paid a membership fee. However, the research gathered by the larger associations is usually thorough, accurate, and worth the cost of membership. Two excellent resources to help you locate a trade association that reports on the business you are researching are the Encyclopedia of Associations (Gale Research) and the Encyclopedia of Business Information Sources (Gale Group).

Local newspapers, journals, magazines, and radio and TV stations are some of the most useful commercial information outlets. Not only do they maintain demographic profiles of their audiences (their income, age, gender, amount of disposable income, types of products and services purchased, what they read, and so on), but many also have information about economic trends in their local areas that could be significant to your business. Contact the sales departments of these businesses and ask them to send you their media kit, explaining that you're working on a marketing plan for a new product and need information about advertising rates and audience demographics. Not only will you learn more about your prospective customers, you'll also learn more about possible advertising outlets for your product or service.

Dun & Bradstreet is another commercial source of market research that offers an abundance of information for making marketing decisions. It operates the world's largest business database and tracks more than 62 million companies around the world, including 11 million in the United States. For more information, visit the Dun & Bradstreet Small Business Solutions website. Finally, there are educational institutions that conduct research in various ways, ranging from faculty-based projects often published under professors' bylines, to student projects, theses, and assignments. You may be able to enlist the aid of students involved in business classes, especially if they're enrolled in an entrepreneurship program. This can be an excellent way of generating research at little or no cost, by engaging students who welcome the professional experience either as interns or for special credit. Contact the university administration and marketing or management studies departments for further information.

Intrinsic forecast method


A forecast based on internal factors, such as an average of past sales. Ant: extrinsic forecast.

Inventory

1) Those stocks or items used to support production (raw materials and work-in-process items), supporting activities (maintenance, repair, and operating supplies), and customer service (finished goods and spare parts). Demand for inventory may be dependent or independent. Inventory functions are anticipation, hedge, cycle (lot size), fluctuation (safety, buffer, or reserve), transportation (pipeline), and service parts.

2) In the theory of constraints, inventory is defined as those items purchased for resale and includes finished goods, work in process, and raw materials. Inventory is always valued at purchase price and includes no value-added costs, as opposed to the traditional cost accounting practice of adding direct labor and allocating overhead as work in process progresses through the production process.

Inventory accounting

The branch of accounting dealing with valuing inventory. Inventory may be recorded or valued using either a perpetual or a periodic system. A perpetual inventory record is updated frequently or in real time, while a periodic inventory record is counted or measured at fixed time intervals, e.g., every two weeks or monthly. Inventory valuation methods of LIFO, FIFO, or average costs are used with either recording system.

TIME SERIES ANALYSIS

This section discusses some of the most common techniques for forecasting from intrinsic time series without explicitly looking for seasonal or trend factors. It also examines time series decomposition.

Moving Averages: Perhaps the simplest of all time series forecasting techniques is a moving average. To use this method, we calculate the average of, say, three periods of actual demand and use that to forecast the next period's demand. If this three-period average is to be used as a forecast, it would have to forecast demand in a future period, such as Period 8. Because each average moves ahead one period each time, dropping the oldest value and adding the most recent, this procedure is called a moving average. The number of periods to use in computing the average may be anything from 2 to 12 or more, with 3 or 4 periods being common. If the time series is such that there is no upward or downward trend, then the moving average is a satisfactory technique. If, however, there is any trend or any seasonal effect, then the moving average will not work very well. Moving averages lag behind any trends.

BASIC FORECASTING TECHNIQUES

Forecasting techniques (using the term forecasting in its broadest sense) can be divided into two categories: qualitative and quantitative. The former, which may involve numbers, uses methodology that is not mathematical. Qualitative techniques rely on judgment, intuition, and subjective evaluation. Among the major techniques within this category are market research (surveys), Delphi (panel consensus), historical analogy, and management estimation (guess). In APICS terminology, all of these techniques represent predictions rather than forecasts (in the narrow sense). The other class of techniques, quantitative, can be divided into intrinsic and extrinsic types. Intrinsic techniques often are called time series analysis techniques. They involve mathematical manipulation of the demand history for an item. These techniques are the most commonly used in forecasting for production and inventory control. The other group of quantitative techniques, extrinsic methods, creates a forecast by attempting to relate demand for an item to

data about another item, a group of items, or outside factors (such as general economic conditions).

Qualitative Techniques

We mentioned some aspects of market research in discussing data sources. While these techniques are based on good theory and can yield valuable information for marketing decisions, they are not intended directly to support inventory decisions. Rather, they are intended to support product development and promotion strategies. Data gathered by these methods should be considered in some aggregate inventory or capacity planning decisions, but should not be the sole data source for such decisions.

The Delphi, or panel consensus, method may be useful in technological forecasting, that is, in predicting the general state of the market, economy, or technological advances five or more years from now, based on expert opinion. (The name for this method comes from the ancient Greek oracle of Delphi, which forecast future events.) The process of creating a Delphi forecast is a variation of the following: A panel of futurists is asked a question, such as, "In the next ten years, which consumer products do you envision containing microprocessors as an integral part?" Each specialist independently submits a list of such items to the panel coordinator. The combined lists then are sent back to each panel member for evaluation and rating of likelihood of occurrence. Panel members may see something that they hadn't thought of and rate it highly. Also, members may have second thoughts about items they themselves previously submitted. After a sufficient number of cycles (generally two or three), the result is a list with high consensus. The Delphi technique is not a suitable technique for short-range forecasting, certainly not for individual products.

When attempting to forecast demand for a new item, one faces a shortage of historical data. A useful technique is to examine the demand history for an analogous product. If the related product is very similar, quantitative techniques may be used. But if the relationship is tenuous, it may be more appropriate to relate the products only qualitatively in order to get an impression of demand patterns or aggregate demand. For example, the seasonal demand pattern for an established product such as tennis balls may be used to estimate the expected demand pattern for tennis gloves. The actual levels and trends for the latter cannot be determined in this manner with any precision, but the seasonal pattern may be expected to be similar.

Finally, we must not overlook management estimation (intuition) as a prediction method. It is widely practiced with regard to new products or unexpected changes in demand for established product lines. Not everyone has estimation talent, however. Some studies have shown that a mathematical technique, consistently followed, will lead to better results than the "expert modification" of those forecasts. Nonetheless, many mathematical techniques need significant quantities of historical data that may not be available. When

substantial data are lacking, subjective management judgment may be the better alternative.

Quantitative Techniques

Intrinsic techniques use the time-sequenced history of activity for a particular item as source data to forecast future activity for that item. Such a history is commonly referred to as a time series. The characteristics of such series can be labeled in various ways, and the algebraic representation of such graphs can be accomplished by a variety of methods. Generally, a time series can be thought of as consisting of four components or underlying factors: (1) cyclical, (2) trend, (3) seasonal, and (4) random (or irregular).

The cyclical factor traditionally refers to the business cycle, that is, to long-range trends in the overall economy. The cyclical factor can be very important in forecasting for long-range planning. However, it is of little use in forecasting demand for individual products, which rarely have sufficient data to permit a distinction between the effect of the business cycle and the effect of the product life cycle. For that reason, the time series used for short-term forecasting generally have only trend, seasonal, and random components.

The trend component generally is modeled as a line, which is described by an intercept or base level, which we designate L, and a slope, which we designate T. The trend line may be modified by a seasonal phenomenon (S). All data are somewhat muddled by a random, irregular, or otherwise unpredictable variable (R). Mathematically this process is based on a combined multiplicative and additive model of the following sort:

D = (L + T) x S + R

where D is demand. In this version T, trend, is expressed in the same units as L, level, and T may be positive or negative. R, random, is expressed in the same units; its expected value is 0. S, seasonal, is a dimensionless number having an expected value of 1.

For example, we may know that the demand for a certain Bruce Springsteen anthology is averaging 10,000 units per month, with a trend of minus 500 per month (the pattern is to sell 500 fewer units each month). However, the month currently being forecast is December; due to seasonal variation, December averages 40 percent higher than the typical month. Average forecast error using this model has been 800 units. In this example the demand forecast is

D = (10,000 - 500) x 1.4 + 0 = 13,300 units

Because the average forecast error has been 800 units, and because errors twice the average are not uncommon occurrences, we would not be surprised if December's actual sales were anywhere from 13,300 - 1,600 = 11,700 units to 13,300 + 1,600 = 14,900 units.
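The December calculation is a one-liner once the model is written down. A minimal sketch in Python, using only the numbers given in the example:

```python
def demand_forecast(level, trend, seasonal, random_expected=0.0):
    """Combined multiplicative/additive model: D = (L + T) * S + R."""
    return (level + trend) * seasonal + random_expected

forecast = demand_forecast(10_000, -500, 1.4)   # December forecast
avg_error = 800
print(f"forecast = {forecast:,.0f} units")          # 13,300
print(f"range: {forecast - 2 * avg_error:,.0f} "
      f"to {forecast + 2 * avg_error:,.0f} units")  # 11,700 to 14,900
```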

Time Series Models

An example of a time series for 25 periods is plotted in Fig. 1 from the numerical data in Table 1. The data might represent the weekly demand for some product. We use x to indicate an observation and t to represent the index of the time period. The observed demand for time t is specifically designated xt. The data from 1 through T is x1, x2, ..., xT. The lines connecting the observations on the figure are provided only to clarify the picture and otherwise have no meaning.

Table 1. Weekly demand for weeks 1 through 30

Figure 1. A time series of weekly demand

Mathematical Model

Our goal is to determine a model that explains the observed data and allows extrapolation into the future to provide a forecast. The simplest model suggests that the time series is a constant b, with variations about that constant value determined by a random variable εt:

Xt = b + εt    (1)

The upper case Xt represents the random variable that is the unknown demand at time t, while the lower case xt is a value that has actually been observed. The random variation about the mean value is called the noise, εt. The noise is assumed to have a mean value of zero and a specified variance, and the variations in two different time periods are independent. Specifically

E[εt] = 0, Var[εt] = σ², E[εtεs] = 0 for s ≠ t    (2)

A more complex model includes a linear trend for the data:

Xt = a + bt + εt    (3)

Of course (1) and (3) are special cases of a polynomial model.

A model for a seasonal variation might include transcendental functions; the cycle of the model below is 4, so it might be used to represent data for the four seasons of the year:

Xt = b + c sin(2πt/4) + d cos(2πt/4) + εt

In every model considered here, the time series is a function only of time and the parameters of the models. We can write

Xt = f(t) + εt

Since for any given time t the value of f(t) is a constant and the expected value of εt is zero,

E[Xt] = f(t)

The model supposes that there are two components of variability for the time series: the variation of the mean value with time, and the noise. Time is the only factor affecting the mean value, while all other factors are described by the noise component. Of course, these assumptions may not in fact be true, but this section is devoted to cases that can be abstracted to this simple form with reasonable accuracy.

One of the problems of time series analysis is to find the best form of the model for a particular situation. In this introductory discussion we are primarily concerned about the simple constant or trend models. We leave the problem of choosing the best model to a more advanced discussion. In the following paragraphs we describe methods for fitting the model, forecasting from the model and measuring the accuracy of the forecast. We illustrate the discussion of this section with the moving average forecasting method. Several other methods are described on later pages.

Fitting Parameters of the Model Once a model is selected and data is available, it is the job of the statistician to find parameter values that best fit the historical data. We can only hope that the resulting model will provide good predictions of future observations. Statisticians usually assume all values in a given sample are equally valid. For time series however, most methods recognize that data from recent times are more representative of current conditions than data from times well in the past. Influences governing the data almost certainly change with time and a method should have the capability of neglecting old data while favoring the new. A model estimate should be able to change over time to reflect changing conditions. In this discussion, the time series model includes one or more parameters. We identify the estimated values of these parameters with hats on the parameter notation.

The procedures also provide estimates of the standard deviation of the noise, σ. Again the estimate is indicated with a hat: σ̂.

To illustrate these concepts consider the data in Table 1. Say that the statistician has just observed the demand in period 20. She also has available the demands for periods 1 through 19. She cannot know the future, so the information shown for periods 21 through 30 is not available. The statistician thinks that the factors that influence demand are changing very slowly, if at all, and proposes the simple constant model for the demand as in Eq. 1. With the assumed model, the values of demand are random variables drawn from a population with mean value b. The best estimator of b is the average of the observed data. Using all 20 points, the estimate is the average of the observations x_1 through x_20.

This is the best estimate that can be found from the 20 data points. It can be shown that this estimate minimizes the sum of squares of the errors. We note, however, that the first data point is given the same weight as the last in the computation. If we think that the model is actually changing over time, perhaps it is better to use a method that gives less weight to old data and more to the new. One possibility is to include only later data in the estimate, for example using only the last ten observations, or only the last five.

The latter two estimates are called moving averages because the range of the observations averaged is moving with time. Which is the better estimate for the application? We really can't tell at this point. The estimator that uses all data points will certainly be the best if the time series follows the assumed constant model; however, if the situation is actually changing, perhaps the estimator with only five data points is better. In general, the moving average estimator is the average of the last m observations:

b̂_T = (x_(T-m+1) + x_(T-m+2) + ... + x_T)/m

The quantity m is the moving average interval and is the parameter of this forecasting method.

Forecasting from the Model

The purpose of modeling a time series is usually to make forecasts of the future. The forecasts are used directly for making decisions such as ordering replenishments for an inventory or staffing workers for production. They might also be used as part of a mathematical model for a more complex decision analysis. The current time is T, and the data for the actual demands for times 1 through T are known. Say we are attempting to forecast the demand at time T + τ, where τ is the number of periods into the future. The unknown demand is the random variable X_(T+τ), and its ultimate realization is x_(T+τ). Our forecast of the realization is F_(T+τ). Of course the best that we can hope to do is estimate the mean value of X_(T+τ). Even if the time series actually follows the assumed model, the future value of the noise is unknowable. Assuming the model is correct,

E[X_(T+τ)] = f(T+τ)

The parameters of the forecast are estimated from the data for times 1 through T. Using a specific value of τ in this formula provides the forecast for time T + τ. When we look at the last T observations as only one of the possible time series that could have been observed, the forecast is a random variable. We should be able to describe the probability distribution of the random variable, including its mean and variance. For the moving average example, the statistician adopts the model

X_t = b + ε_t

Assuming T is 20 and using the moving average with ten periods, the estimated parameter is the average of observations 11 through 20:

b̂_20 = (x_11 + x_12 + ... + x_20)/10

Since this model has a constant expected value over time, the forecast is the same for all future periods.

Assuming the model is correct, the forecast is the average of m observations, all with the same mean b and standard deviation σ. Since the noise is normally distributed, the forecast is also normally distributed with mean b and standard deviation σ/√m.

Measuring the Accuracy of the Forecast

Table 2 shows a series of forecasts for periods 11 through 20 using the data from Table 1. The forecasts are obtained with a moving average using m equal to 10 and τ equal to 1.

Table 2. Forecast errors

Although in practice one might round the forecasts to integers, we keep fractions here to observe better statistical properties. The error of the forecast is the difference between the observation and the forecast:

e_t = x_t - F_t

One common measure of forecasting error is the mean absolute deviation, MAD:

MAD = (|e_1| + |e_2| + ... + |e_n|)/n

where n error observations are used to compute the mean. The sample variance of error is also a useful measure, and the standard deviation is the square root of the sample variance:

s² = Σ(e_i - ē)²/(n - 1)

Here ē is the average error and n is the number of observations. As n grows, 1.25 times the MAD provides a reasonable estimate of the sample standard deviation (the factor 1.25 holds for normally distributed errors).

From the example data we compute the MAD and standard deviation for the ten observations: MAD = (8.7 + 2.4 + ... + 0.9)/10 = 4.11, and 1.25(MAD) = 5.138 is approximately equal to the sample standard deviation. The time series used as an example is simulated with a constant mean. Deviations from the mean are normally distributed with mean zero and standard deviation 5. The error standard deviation includes the combined effects of errors in the model and the noise, so one would expect a value greater than 5. Of course, a different realization of the simulation will yield different statistical values.

The Excel worksheet constructed by the Forecasting add-in illustrates the computation for the example data. The data is in column B. Column C holds the moving averages, and the one-period forecasts are in column D. The error in column E is the difference between columns B and D for rows that have both data and forecast. The standard deviation of the error is in cell E6 and the MAD is in cell E7.

- Moving Average

The moving average forecast is based on the assumption of a constant model:

X_t = b + ε_t

We estimate the single parameter of the model at time T as the average of the last m observations, where m is the moving average interval:

b̂_T = (x_(T-m+1) + x_(T-m+2) + ... + x_T)/m

Since the model assumes a constant underlying mean, the forecast for any number of periods in the future is the same as the estimate of the parameter:

F_(T+τ) = b̂_T

In practice the moving average will provide a good estimate of the mean of the time series if the mean is constant or slowly changing. In the case of a constant mean, the largest value of m will give the best estimates of the underlying mean: a longer observation period averages out the effects of variability. The purpose of providing a smaller m is to allow the forecast to respond to a change in the underlying process. To illustrate, we propose a data set that incorporates changes in the underlying mean of the time series. The figure shows the time series used for illustration together with the mean demand from which the series was generated. The mean begins as a constant at 10. Starting at time 21, it increases by one unit in each period until it reaches the value of 20 at time 30. Then it becomes constant again. The data is simulated by adding to the mean a random noise from a Normal distribution with zero mean and standard deviation 3. The results of the simulation are rounded to the nearest integer.
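The simulated series described here is easy to reproduce in spirit; the seed and exact draws below are ours, not the text's:

import random

random.seed(1)
# constant at 10, ramp by 1 per period from time 21 to 30, then constant at 20
means = [10] * 20 + list(range(11, 21)) + [20] * 10
series = [round(m + random.gauss(0, 3)) for m in means]   # Normal(0, 3) noise, rounded
print(series)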

The table shows the simulated observations used for the example. When we use the table, we must remember that at any given time, only the past data are known.

The estimates of the model parameter, b̂_t, for three different values of m are shown together with the mean of the time series in the figure below. The figure shows the moving average estimate of the mean at each time and not the forecast. The forecasts would shift the moving average curves to the right by τ periods.

One conclusion is immediately apparent from the figure. For all three estimates the moving average lags behind the linear trend, with the lag increasing with m. The lag is the distance between the model and the estimate in the time dimension. Because of the lag, the moving average underestimates the observations as the mean is increasing. The bias of the estimator is the difference, at a specific time, between the mean value of the model and the mean value predicted by the moving average. The bias when the mean is increasing is negative; for a decreasing mean, the bias is positive. The lag in time and the bias introduced in the estimate are functions of m: the larger the value of m, the larger the magnitude of lag and bias. For a continuously increasing series with trend a, the lag and bias of the estimator of the mean are

lag = (m - 1)/2,  bias = -a(m - 1)/2

The example curves do not match these equations exactly because the example model is not continuously increasing; rather, it starts as a constant, changes to a trend and then becomes constant again. Also the example curves are affected by the noise. The moving average forecast of τ periods into the future is represented by shifting the curves to the right, and the lag and bias increase proportionally. For a forecast τ periods into the future, compared to the model parameters,

lag = τ + (m - 1)/2,  bias = -a(τ + (m - 1)/2)

Again, these formulas are for a time series with a constant linear trend.

We should not be surprised at this result. The moving average estimator is based on the assumption of a constant mean, and the example has a linear trend in the mean during a portion of the study period. Since real time series will rarely exactly obey the assumptions of any model, we should be prepared for such results. We can also conclude from the figure that the variability of the noise has the largest effect for smaller m: the estimate is much more volatile for the moving average of 5 than for the moving average of 20. We have the conflicting desires to increase m to reduce the effect of variability due to the noise, and to decrease m to make the forecast more responsive to changes in the mean.

The error is the difference between the actual data and the forecasted value. If the time series is truly a constant value, the expected value of the error is zero, and the variance of the error is comprised of a term that is a function of m and a second term that is the variance of the noise, σ²:

Var(e) = σ²/m + σ²

The first term is the variance of the mean estimated with a sample of m observations, assuming the data comes from a population with a constant mean. This term is minimized by making m as large as possible, but a large m makes the forecast unresponsive to a change in the underlying time series. To make the forecast responsive to changes, we want m as small as possible (1), but this increases the error variance. Practical forecasting requires an intermediate value.
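A small loop makes the tradeoff concrete, using σ = 5 as in the earlier simulation:

sigma = 5.0
for m in (5, 10, 20):
    var_error = sigma ** 2 / m + sigma ** 2      # estimate variance + noise variance
    print(m, round(var_error, 2), round(var_error ** 0.5, 2))
# m =  5: variance 30.00, std 5.48  (responsive but noisy)
# m = 10: variance 27.50, std 5.24
# m = 20: variance 26.25, std 5.12  (smooth but slow to react)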

Forecasting with Excel The Forecasting add-in implements the moving average formulas. The example below shows the analysis provided by the add-in for the sample data in column B. The first 10 observations are indexed -9 through 0. Compared to the table above, the period indices are shifted by -10.

The first ten observations provide the startup values for the estimate and are used to compute the moving average for period 0. The MA(10) column (C) shows the computed moving averages. The moving average parameter m is in cell C3. The Fore(1) column (D) shows a forecast for one period into the future. The forecast interval τ is in cell D3; when the forecast interval is changed to a larger number, the numbers in the Fore column are shifted down. The Err(1) column (E) shows the difference between the observation and the forecast. For example, the observation at time 1 is 6, and the forecasted value made from the moving average at time 0 is 11.1, so the error is -5.1. The standard deviation and Mean Absolute Deviation (MAD) are computed in cells E6 and E7 respectively.

- Exponential Smoothing

As for the moving average, this method assumes that the time series follows a constant model:

X_t = b + ε_t

The value of b is estimated as the weighted average of the last observation and the last estimate:

b̂_T = α·x_T + (1 - α)·b̂_(T-1)

Here α is a parameter in the interval [0, 1]. Rearranging obtains an alternative form:

b̂_T = b̂_(T-1) + α(x_T - b̂_(T-1))

The new estimate is the old estimate plus a proportion of the observed error. Because we are assuming a constant model, the forecast is the same as the estimate: F_(T+τ) = b̂_T.
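A minimal sketch of the update in Python; the warm-up value 11.1 and the first observation 6 are from the example, while the remaining observations are hypothetical:

def smooth(old_estimate, observation, alpha):
    # new estimate = old estimate + alpha * (observed error)
    return old_estimate + alpha * (observation - old_estimate)

estimate = 11.1            # average of the first ten observations
for x in (6, 12, 7, 13):   # first value from the text, rest hypothetical
    estimate = smooth(estimate, x, alpha=0.2)
    print(round(estimate, 3))   # first printed value: 10.08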

We illustrate the method using the parameter value α = 0.2 and the data below. The first 10 observations are used to warm up the procedure, with the average of the observations providing the estimate for time 10. The average is 11.1. The observation at time 11 is 6, so the value of the estimate for time 11 is

b̂_11 = 0.2(6) + 0.8(11.1) = 10.08

Subsequent estimates are computed with the exponential smoothing formula and are shown in the table.

Only two data elements are required to compute each estimate: the observed data and the old estimate. This contrasts with the moving average, which requires the m previous observations for the computation. Replacing b̂_(T-1) with its equivalent, we find that the estimate is

b̂_T = α·x_T + α(1 - α)·x_(T-1) + (1 - α)²·b̂_(T-2)

Continuing in this fashion, we discover that the estimate is really a weighted sum of all past data:

b̂_T = α·x_T + α(1 - α)·x_(T-1) + α(1 - α)²·x_(T-2) + α(1 - α)³·x_(T-3) + ...

Larger values of α provide relatively greater weight to more recent data than smaller values of α. With the value α = 1, the estimate is simply the last data point. With the value α = 0, the estimate never changes: b̂_T is the same as b̂_(T-1). The figure shows the parameter estimates obtained for three different values of α together with the mean of the time series. Although the model for this method is a constant, we illustrate the response to a time series with a trend. The simulated example includes a trend of 1 from time 20 to 30.

A lag characteristic, similar to the one associated with the moving average estimate, can also be seen in the figure. The lag and bias for the exponential smoothing estimate can be expressed as functions of α:

lag = (1 - α)/α,  bias = -a(1 - α)/α

The quantity a in the expressions is the linear trend value. For smaller values of α we obtain a greater lag in response to the trend.

The error is the difference between the actual data and the forecasted value. If the time series is truly a constant value, the expected value of the error is zero, and the variance of the error is comprised of a term that is a function of α and a second term that is the variance of the noise, σ²:

Var(e) = σ²·α/(2 - α) + σ²

The variance of the error increases as α increases. To minimize the effect of noise, we would like to make α as small as possible (0), but this makes the forecast unresponsive to a change in the underlying time series. To make the forecast responsive to changes, we want α as large as possible (1), but this increases the error variance. Practical forecasting requires an intermediate value. We equate the approximating error for the moving average and exponential smoothing methods:

σ²/m = σ²·α/(2 - α)

Solving for α, we find the value providing the same approximation error as the moving average:

α = 2/(m + 1)

Using this relation between the parameters of the two methods, we find that the lag and bias introduced by the trend will also be the same. The parameters used in the moving average illustrations of the last page (m = 5, 10, 20) are roughly comparable to the parameters used for exponential smoothing in the figure above (α = 0.4, 0.2, 0.1).
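The conversion is one line of Python; a quick check reproduces the rough correspondence quoted above:

def alpha_for(m):
    # exponential smoothing parameter with the same error variance as a
    # moving average of interval m, from alpha/(2 - alpha) = 1/m
    return 2 / (m + 1)

for m in (5, 10, 20):
    print(m, round(alpha_for(m), 3))   # 0.333, 0.182, 0.095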

Forecasting with Excel The Forecasting add-in implements the exponential smoothing formulas. The example below shows the analysis provided by the add-in for the sample data. The first 10 observations are indexed -9 through 0. Compared to the table above, the period indices are shifted by -10.

The first ten observations provide the startup values for the estimate. The EXP column (C) shows the computed estimates. The Fore(1) column (D) shows a forecast for one period into the future. The forecast interval is in cell D3; when the forecast interval is changed to a larger number, the numbers in the forecast column are shifted down. The value of α is in cell C3. When this cell is changed, all the computed cells automatically adjust.

The Err(1) column (E) shows the error between the observation and the forecast. The standard deviation and Mean Absolute Deviation (MAD) are computed in cells E6 and E7. The value in C3 can be used as the optimization variable for the Excel Solver to minimize the error standard deviation or the MAD.

- Regression

The regression forecast is based on the assumption of a model consisting of a constant and a linear trend:

X_t = a + b·t + ε_t

For the purposes of a forecast where the parameters of the model may change, it is more convenient to express the model as a function of τ, where τ is the positive displacement from a reference time T:

X_(T+τ) = a_T + b_T·τ + ε_(T+τ)

The forecast is based on the estimated parameters:

F_(T+τ) = â_T + b̂_T·τ

The parameters at time T are computed from the observation at time T and the m - 1 previous observations: x_(T-m+1), ..., x_T.

Using these m observations, we find the linear equation that minimizes the sum of squares of the differences of the observations from the fitted line. The values of the time indices are the independent variables for the simple regression, and the values of the observations are the dependent variables. The following parameter estimates are based on the least squares normal equations for fitting a linear equation. Writing t̄ and x̄ for the means of the indices and observations in the fitting window,

b̂_T = Σ(t - t̄)(x_t - x̄) / Σ(t - t̄)²,  â_T = x̄ + b̂_T·(T - t̄)

where the sums run over the last m periods, t = T-m+1, ..., T. The forecast for the expected value for future periods is a constant plus a linear term that depends on the number of periods into the future:

F_(T+τ) = â_T + b̂_T·τ
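A sketch of the rolling regression forecast under these normal equations; the demand list is hypothetical:

def regression_forecast(data, m, tau):
    """Fit a least-squares line to the last m observations and
    extrapolate tau periods beyond the most recent observation."""
    window = data[-m:]
    t_bar = (m - 1) / 2                  # mean of the local indices 0..m-1
    x_bar = sum(window) / m
    num = sum((t - t_bar) * (x - x_bar) for t, x in enumerate(window))
    den = sum((t - t_bar) ** 2 for t in range(m))
    slope = num / den
    level_at_T = x_bar + slope * ((m - 1) - t_bar)   # fitted value at the last period
    return level_at_T + slope * tau

demand = [10, 9, 11, 10, 12, 13, 13, 15, 16, 16]     # hypothetical observations
print(round(regression_forecast(demand, m=10, tau=1), 2))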

With a trend estimate as part of the forecast, this method will track changes in trend. We use the same data as for the other forecasting methods. We repeat the data below. Recall that the simulated data begins with a constant mean of 10. At time 11 the mean increases with a trend of 1 until time 20 when the mean becomes a constant again with value 20. The noise is simulated using a normal distribution with mean 0 and standard deviation 3.

The estimates for three different values of m are shown together with the mean of the time series in the figure below. The figure shows the estimate of the mean at each time and not the forecast.

The estimate follows the trend line more closely than the moving average or exponential smoothing methods. During the times when the mean is constant, the regression estimate is more variable than the moving average method.

Forecasting with Excel The Forecasting add-in implements the regression formulas. The example below shows the analysis provided by the add-in for the sample data in column B. The first 10 observations are indexed -9 through 0. Compared to the table above, the period indices are shifted by -10.

The first ten observations provide the startup values for the estimate. The constant and trend estimates are shown in columns C and D. The Fore(1) column (E) shows a forecast for one period into the future. The forecast interval is in cell D3, and the regression parameter m is in cell C3. When the forecast interval is changed to a larger number, the numbers in the Fore column are shifted down. The Err(1) column (F) shows the error between the observation and the forecast. The standard deviation and Mean Absolute Deviation (MAD) are computed in cells F6 and F7 respectively.

Breakeven Point

The breakeven point indicates the level of activity where all costs are fully recovered. It is a no-profit, no-loss situation. If a company's breakeven point is 40%, it means the company will survive with a 40% production load in a no-loss condition. The breakeven point is the level of survival. A company should be run above the breakeven point, and it is the responsibility of the management to cross that level shortly after the gestation period. The gestation period is the time required by a new company until commercial production starts. The breakeven point is usually expressed in terms of a certain value of sales.

Break Even Analysis


By inserting different prices into the break-even formula Q = FC/(P - VC), you will obtain a number of break-even points, one for each possible price charged. If the firm changes the selling price for its product from $2 to $2.30 in the example above (fixed costs of $1,000 and unit variable cost of $0.60), then it would have to sell only 1000/(2.30 - 0.60) ≈ 589 units to break even, rather than 715.
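The computation is easy to script; in the Python sketch below, math.ceil rounds the break-even quantity up to whole units, matching the 715 and 589 figures:

import math

fixed_costs = 1000.0        # FC from the example
unit_variable_cost = 0.60   # VC from the example

def breakeven_units(price):
    # break-even quantity: fixed costs / contribution per unit
    return fixed_costs / (price - unit_variable_cost)

for price in (2.00, 2.30):
    print(price, math.ceil(breakeven_units(price)))   # 715 units, then 589 units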

To make the results clearer, they can be graphed. To do this, you draw the total cost curve (TC in the diagram) which shows the total cost associated with each possible level of output, the fixed cost curve (FC) which shows the costs that do not vary with output level, and finally the various total revenue lines (R1, R2, and R3) which show the total amount of revenue received at each output level, given the price you will be charging. The break even points (A,B,C) are the points of intersection between the total cost curve (TC) and a total revenue curve (R1, R2, or R3). The break even quantity at each selling price can be read off the horizontal axis and the break even price at each selling price can be read off the vertical axis. The total cost, total revenue, and fixed cost curves can each be constructed with simple formulae. For example, the total revenue curve is simply the product of selling price times quantity for each output quantity. The data used in these formulae come either from accounting records or from various estimation techniques such as regression analysis.
Application

The break-even point is one of the simplest yet least used analytical tools in management. It helps to provide a dynamic view of the relationships between sales, costs and profits. A better understanding of break-even, for example expressing break-even sales as a percentage of actual sales, can give managers a chance to understand when to expect to break even (by linking the percentage to when in the week or month this percentage of sales might occur). The break-even point is a special case of Target Income Sales, where Target Income is 0 (breaking even). This is very important for financial analysis.

Limitations
Break-even analysis is only a supply-side (i.e. costs only) analysis, as it tells you nothing about what sales are actually likely to be for the product at these various prices.
It assumes that fixed costs (FC) are constant. Although this is true in the short run, an increase in the scale of production is likely to cause fixed costs to rise.
It assumes average variable costs are constant per unit of output, at least in the range of likely quantities of sales (i.e. linearity).
It assumes that the quantity of goods produced is equal to the quantity of goods sold (i.e., there is no change in the quantity of goods held in inventory between the beginning and the end of the period).
In multi-product companies, it assumes that the relative proportions of each product sold and produced are constant (i.e., the sales mix is constant).

Appraisal of Break-even Analysis

The main advantage of break-even analysis is that it points out the relationship between cost, production volume and returns. It can be extended to show how changes in fixed cost-variable cost relationships, in commodity prices, or in revenues will affect profit levels and break-even points. Limitations of break-even analysis include:
It is best suited to the analysis of one product at a time;
It may be difficult to classify a cost as all variable or all fixed; and
There may be a tendency to continue to use a break-even analysis after the cost and income functions have changed.

Break-even analysis is most useful when used with partial budgeting or capital budgeting techniques. The major benefit to using break-even analysis is that it indicates the lowest amount of business activity necessary to prevent losses.

Computation
In the linear Cost-Volume-Profit Analysis model,[2] the break-even point (in terms of Unit Sales, X) can be directly computed by setting Total Revenue (TR) equal to Total Costs (TC):

TR = TC
P × X = TFC + V × X
X = TFC / (P - V)

where:
TFC is Total Fixed Costs, P is Unit Sale Price, and V is Unit Variable Cost.

The Break-Even Point can alternatively be computed as the point where Contribution equals Fixed Costs.

The quantity P - V is of interest in its own right, and is called the Unit Contribution Margin (C): it is the marginal profit per unit, or alternatively the portion of each sale that contributes to Fixed Costs. Thus the break-even point can be more simply computed as the point where Total Contribution = Total Fixed Cost:

X × C = TFC
X = TFC / C

In currency units (sales proceeds) to reach break-even, one can use the above calculation and multiply by Price, or equivalently use the Contribution Margin Ratio (Unit Contribution Margin over Price) to compute it as:

Break-even (in sales) = TFC / (C/P)

The same result follows from equating revenue and cost directly. At break-even, R = C, where R is revenue generated and C is total cost incurred, i.e. fixed costs plus variable costs:

Q × P = TFC + Q × VC
Q × P - Q × VC = TFC
Q × (P - VC) = TFC
Q = TFC / (P - VC)

or, in sales value, Break-even = TFC / (C/S ratio), where the C/S (contribution-to-sales) ratio is (P - VC)/P.

Margin of Safety
Margin of safety represents the strength of the business. It enables a business to know the exact amount it has gained or lost and whether it is over or below the break-even point.[3]

margin of safety = current output - breakeven output
margin of safety % = (current output - breakeven output)/current output × 100

When dealing with budgets you would instead replace "current output" with "budgeted output". If the P/V ratio is given, then margin of safety = profit / (P/V ratio).

In units:
Break-even = FC / (SP - VC), where FC is Fixed Cost, SP is Selling Price and VC is Variable Cost per unit.
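Putting the unit break-even and margin-of-safety formulas together in a short Python sketch; the figures reuse the earlier example, and the 1,000-unit current output is an assumption of ours:

def breakeven_units(fc, sp, vc):
    # Break-even (units) = Fixed Cost / (Selling Price - Variable Cost)
    return fc / (sp - vc)

def margin_of_safety_pct(current_output, breakeven_output):
    return (current_output - breakeven_output) / current_output * 100

be = breakeven_units(fc=1000, sp=2.0, vc=0.6)
print(round(be, 1))                              # 714.3 units
print(round(margin_of_safety_pct(1000, be), 1))  # 28.6 (assumed current output of 1000)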

Inventory valuation

An inventory valuation allows a company to provide a monetary value for items that make up their inventory. Inventories are usually the largest current asset of a business, and proper measurement of them is necessary to assure accurate financial statements. If inventory is not properly measured, expenses and revenues cannot be properly matched and a company could make poor business decisions.


Inventory accounting systems


The two most widely used inventory accounting systems are the periodic and the perpetual.
Perpetual: The perpetual inventory system requires accounting records to show the amount of inventory on hand at all times. It maintains a separate account in the subsidiary ledger for each good in stock, and the account is updated each time a quantity is added or taken out.
Periodic: In the periodic inventory system, sales are recorded as they occur but the inventory is not updated. A physical inventory must be taken at the end of the year to determine the cost of goods sold.
Regardless of which inventory accounting system is used, it is good practice to perform a physical inventory at least once a year.

Inventory costing methods - periodic


The periodic system records only revenue each time a sale is made. In order to determine the cost of goods sold, a physical inventory must be taken. The most commonly used inventory costing methods under a periodic system are:
1. first-in first-out (FIFO), 2. last-in first-out (LIFO), and 3. average cost or weighted average cost.

These methods produce different results because their cost flows are based upon different assumptions. The FIFO method bases its cost flow on the chronological order in which purchases are made, while the LIFO method bases its cost flow on reverse chronological order. The average cost method produces a cost flow based on a weighted average of unit costs.
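A Python sketch contrasting the three cost flows on the same purchase history; the lots and sale quantity are hypothetical:

# purchases: (units, unit cost) in chronological order -- hypothetical figures
purchases = [(100, 10.0), (100, 12.0), (100, 14.0)]
units_sold = 150

def cogs_fifo(lots, sold):
    cost, remaining = 0.0, sold
    for units, price in lots:                      # oldest costs flow out first
        take = min(units, remaining)
        cost += take * price
        remaining -= take
    return cost

def cogs_lifo(lots, sold):
    return cogs_fifo(list(reversed(lots)), sold)   # newest costs flow out first

def cogs_weighted_average(lots, sold):
    total_units = sum(u for u, _ in lots)
    total_cost = sum(u * p for u, p in lots)
    return sold * total_cost / total_units

print(cogs_fifo(purchases, units_sold))                        # 1600.0
print(cogs_lifo(purchases, units_sold))                        # 2000.0
print(round(cogs_weighted_average(purchases, units_sold), 2))  # 1800.0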

Periodic versus perpetual systems


There are fundamental differences in accounting for and reporting merchandise inventory transactions under the periodic and perpetual inventory systems. To record purchases, the periodic system debits the Purchases account while the perpetual system debits the Merchandise Inventory account. To record sales, the perpetual system requires an extra entry to debit the Cost of Goods Sold and credit Merchandise Inventory. By recording the cost of goods sold for each sale, the perpetual inventory system eliminates the need for adjusting entries and the calculation of goods sold at the end of a financial period, both of which the periodic inventory system requires. A perpetual inventory system must be supported by accurate, up-to-date records.

Using non-cost methods to value inventory

Under certain circumstances, valuation of inventory based on cost is impractical. If the market price of a good drops below the purchase price, the lower of cost or market method of valuation is recommended. This method allows declines in inventory value to be offset against income of the period. When goods are damaged or obsolete, and can only be sold for below purchase prices, they should be recorded at net realizable value. The net realizable value is the estimated selling price less any expense incurred to dispose of the good.

Methods used to estimate inventory cost


In certain business operations, taking a physical inventory is impossible or impractical. In such a situation, it is necessary to estimate the inventory cost. Two very popular methods are (1) the retail inventory method and (2) the gross profit method. The retail inventory method uses a cost-to-retail-price ratio: the physical inventory is valued at retail, and it is multiplied by the cost ratio (or percentage) to determine the estimated cost of the ending inventory. The gross profit method uses the previous year's average gross profit margin (i.e. sales minus cost of goods sold, divided by sales). Current year gross profit is estimated by multiplying current year sales by that gross profit margin, the current year cost of goods sold is estimated by subtracting the gross profit from sales, and the ending inventory is estimated by subtracting the estimated cost of goods sold from the goods available for sale.
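A sketch of the gross profit method in Python, with hypothetical figures:

# Gross profit method -- all figures hypothetical
prior_year_gp_margin = 0.30          # (sales - COGS) / sales from last year
sales = 200_000.0
goods_available_at_cost = 180_000.0  # beginning inventory + purchases

estimated_gross_profit = sales * prior_year_gp_margin
estimated_cogs = sales - estimated_gross_profit
estimated_ending_inventory = goods_available_at_cost - estimated_cogs

print(estimated_cogs)               # 140000.0
print(estimated_ending_inventory)   # 40000.0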

Average cost

In economics, average cost or unit cost is equal to total cost divided by the number of goods produced (the output quantity, Q). It is also equal to the sum of average variable costs (total variable costs divided by Q) plus average fixed costs (total fixed costs divided by Q). Average costs may be dependent on the time period considered (increasing production may be expensive or impossible in the short term, for example). Average costs affect the supply curve and are a fundamental component of supply and demand.


Short-run average cost


Average cost is distinct from the price, and depends on the interaction with demand through elasticity of demand and elasticity of supply. In cases of perfect competition, price may be lower than average cost due to marginal cost pricing. Short-run average cost will vary in relation to the quantity produced unless fixed costs are zero and variable costs constant. A cost curve can be plotted, with cost on the y-axis and quantity on the x-axis. Marginal costs are often shown on these graphs, with marginal cost representing the cost of the last unit produced at each point; marginal costs are the derivative of total or variable costs. A typical average cost curve will have a U-shape, because fixed costs are all incurred before any production takes place and marginal costs are typically increasing, because of diminishing marginal productivity. In this "typical" case, for low levels of production marginal costs are below average costs, so average costs are decreasing as quantity increases. An increasing marginal cost curve will intersect a U-shaped average cost curve at its minimum, after which point the average cost curve begins to slope upward. For further increases in production beyond this minimum, marginal cost is above average costs, so average costs are increasing as quantity increases. An example of this typical case would be a factory designed to produce a specific quantity of widgets per period: below a certain production level, average cost is higher due to under-utilised equipment, while above that level, production bottlenecks increase the average cost.

Long-run average cost


The long run is a time frame in which the firm can vary the quantities used of all inputs, even physical capital. A long-run average cost curve can be upward sloping, downward sloping, or downward sloping at relatively low levels of output and upward sloping at relatively high levels of output, with an in-between level of output at which the slope of long-run average cost is zero. The typical long-run average cost curve is U-shaped, by definition reflecting increasing returns to scale where negatively-sloped and decreasing returns to scale where positively sloped. If the firm is a perfect competitor in all input markets, and thus the per-unit prices of all its inputs are unaffected by how much of the inputs the firm purchases, then it can be shown [2][3] that at a particular level of output, the firm has economies of scale (i.e., is operating in a downward sloping region of the long-run average cost curve) if and only if it has increasing returns to scale. Likewise, it has diseconomies of scale (is operating in an upward sloping region of the long-run average cost curve) if and only if it has decreasing returns to scale, and has neither economies nor diseconomies of scale if it has constant returns to scale. In this case, with perfect competition in the output market the long-run market equilibrium will involve all firms operating at the minimum point of their long-run average cost curves (i.e., at the borderline between economies and diseconomies of scale). If, however, the firm is not a perfect competitor in the input markets, then the above conclusions are modified. For example, if there are increasing returns to scale in some range of output levels, but the firm is so big in one or more input markets that increasing its purchases of an input drives up the input's per-unit cost, then the firm could have diseconomies of scale in that range of output levels. Conversely, if the firm is able to get bulk discounts of an input, then it could have economies of scale in some range of output levels even if it has decreasing returns in production in that output range. In some industries, the LRAC is always declining (economies of scale exist indefinitely).

This means that the largest firm tends to have a cost advantage, and the industry tends naturally to become a monopoly, and hence is called a natural monopoly. Natural monopolies tend to exist in industries with high capital costs in relation to variable costs, such as water supply and electricity supply. Long run average cost is the unit cost of producing a certain output when all inputs are variable. The behavioral assumption is that the firm will choose that combination of inputs that will produce the desired quantity at the lowest possible cost.

Relationship to marginal cost


When average cost is declining as output increases, marginal cost is less than average cost. When average cost is rising, marginal cost is greater than average cost. When average cost is neither rising nor falling (at a minimum or maximum), marginal cost equals average cost. Other special cases for average cost and marginal cost appear frequently:
Constant marginal cost / high fixed costs: each additional unit of production is produced at constant additional expense per unit. The average cost curve slopes down continuously, approaching marginal cost. An example may be hydroelectric generation, which has no fuel expense, limited maintenance expenses and a high up-front fixed cost (ignoring irregular maintenance costs or useful lifespan). Industries where fixed marginal costs obtain, such as electrical transmission networks, may meet the conditions for a natural monopoly, because once capacity is built, the marginal cost to the incumbent of serving an additional customer is always lower than the average cost for a potential competitor. The high fixed capital costs are a barrier to entry.
Minimum efficient scale / maximum efficient scale: marginal or average costs may be non-linear, or have discontinuities. Average cost curves may therefore only be shown over a limited scale of production for a given technology. For example, a nuclear plant would be extremely inefficient (very high average cost) for production in small quantities; similarly, its maximum output for any given time period may essentially be fixed, and production above that level may be technically impossible, dangerous or extremely costly. The long-run elasticity of supply is higher, as new plants could be built and brought on-line.
Zero fixed costs (long-run analysis) / constant marginal cost: since there are no economies of scale, average cost will be equal to the constant marginal cost.

Relationship between AC, AFC, AVC and MC


1. The Average Fixed Cost curve (AFC) starts from a height and goes on declining continuously as production increases.
2. The Average Variable Cost curve, the Average Cost curve and the Marginal Cost curve start from a height, reach their minimum points, then rise sharply and continuously.
3. The Average Fixed Cost curve approaches zero asymptotically. The Average Variable Cost curve is never parallel to or as high as the Average Cost curve due to the existence of positive Average Fixed Costs at all levels of production; but the Average Variable Cost curve asymptotically approaches the Average Cost curve from below.
4. The Marginal Cost curve always passes through the minimum points of the Average Variable Cost and Average Cost curves, though the Average Variable Cost curve attains its minimum point prior to that of the Average Cost curve.
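These relationships can be checked numerically; the cubic cost function in the Python sketch below is purely illustrative, not taken from the text:

FIXED = 100.0

def total_cost(q):
    # hypothetical cost function: fixed cost plus a cubic variable part
    return FIXED + 2 * q - 0.5 * q ** 2 + 0.05 * q ** 3

def average_cost(q):
    return total_cost(q) / q

def marginal_cost(q, dq=1e-6):
    # numerical derivative of total cost
    return (total_cost(q + dq) - total_cost(q)) / dq

# While AC is falling, MC lies below AC; past the minimum of AC, MC lies above it.
for q in (2, 6, 10, 14):
    print(q, round(average_cost(q), 2), round(marginal_cost(q), 2))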

FIFO and LIFO accounting




FIFO and LIFO methods are accounting techniques used in managing inventory and financial matters involving the amount of money a company has tied up within inventory of produced goods, raw materials, parts, components, or feed stocks. FIFO stands for first-in, first-out, meaning that the oldest inventory items are recorded as sold first; this does not necessarily mean that the exact oldest physical object has been tracked and sold, it is just an inventory costing convention. LIFO stands for last-in, first-out, meaning that the most recently produced items are recorded as sold first. Since the 1970s, U.S. companies have tended to use LIFO, which reduces their income taxes in times of inflation; LIFO is permitted in only a few countries, such as Japan and the United States. The difference between the cost of an inventory calculated under the FIFO and LIFO methods is called the LIFO reserve. This reserve is essentially the amount by which an entity's taxable income has been deferred by using the LIFO method.[2]
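A minimal Python sketch of the LIFO reserve idea; the inventory layers are hypothetical:

# remaining inventory layers after sales, (units, unit cost) -- hypothetical
fifo_layers = [(50, 14.0), (50, 12.0)]   # newest costs remain under FIFO
lifo_layers = [(50, 10.0), (50, 12.0)]   # oldest costs remain under LIFO

fifo_value = sum(u * p for u, p in fifo_layers)
lifo_value = sum(u * p for u, p in lifo_layers)
lifo_reserve = fifo_value - lifo_value   # income deferred by using LIFO

print(fifo_value, lifo_value, lifo_reserve)   # 1300.0 1100.0 200.0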

Average cost method




Under the average cost method, it is assumed that the cost of inventory is based on the average cost of the goods available for sale during the period. The average cost is computed by dividing the total cost of goods available for sale by the total units available for sale. This gives a weighted-average unit cost that is applied to the units in the ending inventory. There are two commonly used average cost methods: the simple weighted-average cost method and the moving-average cost method.


Weighted Average Cost


Weighted average cost is a method of calculating ending inventory cost. It is also known as AVCO. It takes the cost of goods available for sale and divides it by the total number of units available for sale (beginning inventory plus purchases). This gives a weighted average cost per unit. A physical count is then performed on the ending inventory to determine the number of units left. Finally, this amount is multiplied by the weighted average cost per unit to give an estimate of ending inventory cost.
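A sketch of the AVCO computation in Python, with hypothetical figures:

# Weighted average cost (AVCO) -- all figures hypothetical
beginning_units, beginning_cost = 200, 2200.0
purchases = [(300, 12.0), (500, 13.0)]        # (units, unit cost)

units_available = beginning_units + sum(u for u, _ in purchases)
cost_available = beginning_cost + sum(u * p for u, p in purchases)
wac_per_unit = cost_available / units_available

ending_units = 400                            # from the physical count
print(round(wac_per_unit, 2))                 # 12.3 per unit
print(round(ending_units * wac_per_unit, 2))  # 4920.0 estimated ending inventory cost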

Moving-Average Cost


Moving-average (unit) cost is a method of calculating ending inventory cost. Assume that both beginning inventory units and beginning inventory cost are known; from them, the cost per unit of beginning inventory can be calculated. During the year, multiple purchases are made. Each time, purchase costs are added to beginning inventory cost to get the cost of current inventory. Similarly, the number of units bought is added to beginning inventory to get current goods available for sale. After each purchase, the cost of current inventory is divided by current goods available for sale to get the current cost per unit on goods. Also during the year, multiple sales happen. Current goods available for sale is reduced by the number of units sold, and the cost of current inventory is reduced by the number of units sold times the latest (before this sale) current cost per unit on goods. This deducted amount is added to cost of goods sold. At the end of the year, the last cost per unit on goods, along with a physical count, is used to determine ending inventory cost.
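A sketch of the moving-average cost bookkeeping in Python, with hypothetical events:

# Moving-average cost under a perpetual system -- all figures hypothetical
inventory_units, inventory_cost = 100, 1000.0   # beginning inventory
events = [("buy", 100, 12.0), ("sell", 150, None), ("buy", 100, 11.0)]

cogs = 0.0
for kind, units, price in events:
    if kind == "buy":
        inventory_units += units
        inventory_cost += units * price
    else:                       # sell at the latest cost per unit
        unit_cost = inventory_cost / inventory_units
        inventory_units -= units
        inventory_cost -= units * unit_cost
        cogs += units * unit_cost

print(round(inventory_cost / inventory_units, 3))  # current cost per unit: 11.0
print(round(cogs, 2))                              # cost of goods sold so far: 1650.0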

ABC analysis

The ABC analysis is a business term used to define an inventory categorization technique often used in materials management. It is also known as Selective Inventory Control; ABC is sometimes said to stand for Always Better Control. Policies based on ABC analysis:

A items: very tight control and accurate records
B items: less tightly controlled and good records
C items: simplest controls possible and minimal records

The ABC analysis provides a mechanism for identifying items that will have a significant impact on overall inventory cost,[1] while also providing a mechanism for identifying different categories of stock that will require different management and controls. The ABC analysis suggests that the inventories of an organization are not of equal value. Thus, the inventory is grouped into three categories (A, B, and C) in order of their estimated importance. 'A' items are very important for an organization; because of their high value, frequent value analysis is required, and the organization needs to choose an appropriate order pattern (e.g. just-in-time) to avoid excess capacity. 'B' items are important, but of course less important than A items and more important than C items; therefore B items are intergroup items. 'C' items are marginally important.[4]


ABC analysis categories


There are no fixed thresholds for each class; different proportions can be applied based on objectives and criteria. ABC analysis is similar to the Pareto principle in that the 'A' items will typically account for a large proportion of the overall value but a small percentage of the number of items.[5] An example of ABC classes:

A items - 20% of the items account for 70% of the annual consumption value of the items.
B items - 30% of the items account for 25% of the annual consumption value of the items.
C items - 50% of the items account for 5% of the annual consumption value of the items.[6]

Another recommended breakdown of ABC classes[7]:

1. "A" approximately 10% of items or 66.6% of value
2. "B" approximately 20% of items or 23.3% of value
3. "C" approximately 70% of items or 10.1% of value

ABC Analysis in ERP packages


Major ERP packages (SAP, Oracle, etc.) have a built-in ABC analysis function. Users can execute ABC analysis based on user-defined criteria, and the system applies an ABC code to each item (part).

Example of the Application of Weighed Operation based on ABC class


The table below shows the actual distribution of ABC classes in an electronics manufacturing company with 4051 active parts.

Distribution of ABC class
ABC class                   A    B    C   Total
Number of items (%)         5   10   85    100
Total amount required (%)  75   15   10    100

Using this distribution of ABC classes, and changing the total number of parts to 4000:

Uniform Purchase

When you apply an equal purchasing policy to all 4000 components, for example weekly delivery and a re-order point (safety stock) of 2 weeks' supply, and assuming that there are no lot size constraints, the factory will have 16000 deliveries in 4 weeks and the average inventory will be a 2.5 week supply.
Application of Weighed Purchasing

Uniform condition: all items (4000) - re-order point = 2 week supply, delivery frequency = weekly.

Weighed condition:
A-class items (200) - re-order point = 1 week supply, delivery frequency = weekly.
B-class items (400) - re-order point = 2 week supply, delivery frequency = bi-weekly.
C-class items (3400) - re-order point = 3 week supply, delivery frequency = every 4 weeks.

Weighed Purchase

In comparison, when the weighed purchasing policy is applied based on ABC class (for example, C class delivered monthly, every 4 weeks, with a re-order point of 3 weeks' supply; B class delivered bi-weekly with a re-order point of 2 weeks' supply; and A class delivered weekly with a re-order point of 1 week's supply), the total number of deliveries in 4 weeks will be (A: 200x4 = 800) + (B: 400x2 = 800) + (C: 3400x1 = 3400) = 5000, and the average inventory will be (A: 75% x 1.5 weeks) + (B: 15% x 3 weeks) + (C: 10% x 3.5 weeks) = 1.925 week supply.
Comparison of "Equal" and "Weighed" Purchase (4 weeks span) No No aver of of avera age deli deli ge sup very very suppl ply in 4 in 4 y leve wee wee level l ks ks same delivery frequency, safety 2.5 1.5 stock reduced from 2.5 to 1.5 week 800 wee weeksa, require tighter control with s ksa more manhours.

200 75%

800

increased safety stock level by 2.5 3 400 15% 1600 week 800 wee 20%, delivery frequency reduced to s ks half. Less manhour required increased safety stock from 2.5 to 2.5 3.5 3.5 week supply, delivery frequency 3400 10% 13600 week 3400 wee is one quarter. Drastically reduced s ks manhour requirement.

1.9 average inventory value reduced by 2.5 Tota 100 25 23%, delivery frequency reduced by 4000 16000 week 5000 l % wee 69%. Overall reduction of man s ks requirement

a) A-class items can be put under much tighter control, such as JIT daily delivery. If daily delivery with one day of stock is applied, the delivery frequency will be 4000 in 4 weeks, the average inventory level of A-class items will be a 1.5 day supply, and the total average inventory will be a 1.025 week supply, a reduction of inventory by 59%. The total delivery frequency is then reduced to about half, from 16000 to 8200.
Result

By applying weighed control based on ABC classification, required man-hours and inventory level are drastically reduced.
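The delivery and inventory figures above can be verified with a few lines of Python:

# Verify the weighed-purchasing figures: (items, value share,
# deliveries per 4 weeks per item, average supply in weeks)
classes = {
    "A": (200, 0.75, 4, 1.5),
    "B": (400, 0.15, 2, 3.0),
    "C": (3400, 0.10, 1, 3.5),
}
deliveries = sum(items * freq for items, _, freq, _ in classes.values())
avg_supply = sum(share * weeks for _, share, _, weeks in classes.values())
print(deliveries)             # 5000 deliveries in 4 weeks (vs 16000 uniform)
print(round(avg_supply, 3))   # 1.925 week supply (vs 2.5 uniform)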
Inventory Analysis
Below are the types of inventory analysis carried out for classifying materials, so that materials and processes can be treated differently based on the class of material: ABC, XYZ, FSN, VED, SOS, GOLF, HML, SDE.

ABC Analysis

ABC analysis classifies the materials based on their consumption value during a particular time period (usually a year). The A, B and C bands vary from company to company, but are typically:

A - approx 5% to 10% of the items, accounting for 60% to 80% of the consumption value
B - approx 10% to 30% of the items, accounting for 10% to 30% of the consumption value
C - approx 60% to 85% of the items, accounting for 5% to 15% of the consumption value

Usage of ABC Analysis

1. In day-to-day warehouse operations, materials are sometimes under-issued, over-issued, issued but not recorded in the system, misplaced, stolen, etc. This results in inaccuracy in the inventory. Cycle counting is the process of counting and reconciling the materials. Ideally, every material in the warehouse should be counted at a fixed interval (every year) to maintain 100% accuracy, but counting and reconciling every material is not cost effective. To maintain the accuracy of the inventory in a cost-effective manner, it is recommended to count the materials based on inventory classification. If A class materials are counted at a fixed interval (say six months or a year), then you need to count only 5% to 10% of the total materials, and this covers 60% to 80% of the inventory value; that is, by counting only 5% to 10% of the materials you remove the inaccuracy from 60% to 80% of the inventory value. Similarly, B class materials can be counted at a lower frequency (once in 18 to 24 months) as the number of materials becomes larger, and C class materials at an even lower frequency (once in 27 to 36 months) as the number of materials becomes larger still (60% to 85% of the total materials).
2. An inventory controller will concentrate more on the A class items for reducing inventory: by concentrating on only 5% to 10% of the total items, he or she gets the opportunity to reduce inventory on 60% to 80% of the value.
3. Any reduction in the lead time of A class items results in a reduction in inventory, so the procurement manager will work with suppliers to reduce the lead time.
4. On issue of materials: tight control on A class, moderate control on B class, loose control on C class. So A class items may be issued only after approval from senior executives of the company, B class items may be moderately controlled, and very little control need be exercised while issuing C class items.

Important Note: An A class item need not necessarily be a fast moving item, and a C class item may or may not be a fast moving item. ABC analysis is based purely on the dollar value of consumption.
XYZ Analysis

XYZ analysis is calculated by dividing an item's current stock value by the total stock value of the stores. The items are first sorted in descending order of their current stock value. The values are then accumulated until they reach, say, 60% of the total stock value; these items are grouped as 'X'. Similarly, other items are grouped as 'Y' and 'Z' items as their accumulated value reaches another 30% and 10% respectively. The XYZ analysis gives you an immediate view of which items are expensive to hold. Through this analysis, you can reduce the money locked up by keeping as little as possible of these expensive items.

SOS Analysis

The Seasonal / Off-Seasonal report helps you to view seasonally required items.
S - seasonal materials
OS - off-seasonal (non-seasonal) materials

Purchase planning has to be done carefully if a material is seasonal, as the material will be available only for a particular period of the year. Seasonal items can be further classified into two groups:
1. Items available only briefly. Lychee is a seasonal fruit which is available for only one month in the year. If a juice and pulp company wants to buy this fruit, the procurement department has to plan the requirement in advance, and the procurement job becomes concentrated into one month. Beyond this, shelf life and storage are also big problems, as consumption is planned throughout the year while the buying window is only one month.
2. Items that are seasonal but available throughout the year, such as grains and other non-perishable items. These items are bought during the season, when they are cheaply available. The company can take advantage of economies of scale by buying these materials in bulk, but at the same time the inventory carrying cost should not exceed the profit margin gained by holding the large inventory.

Non-seasonal materials are available throughout the year without any significant price variation, for example plastics and metals. The prices of these materials are independent of the season.

HML Analysis

This analysis classifies materials based on their prices:
H - high price materials
M - medium price materials
L - low price materials

The procurement department is most concerned with the prices of materials, so this analysis helps it to take decisions such as who will procure what, based on the hierarchy and the price of the material. Other objectives include:
Deciding whether to procure the exact requirement, opt for the EOQ, or purchase only when needed.
Evolving purchasing policies, i.e. whether to purchase in exact quantities as required, to purchase in EOQ, or to purchase only when absolutely necessary.
Controlling consumption at the department level: authorization to draw materials from the stores is given to senior staff for H items, the next lower level of seniority for M items, and junior staff for L items.
Planning cycle counting: H class items are counted very frequently, M class at a lower frequency, and L class at the lowest frequency compared to H and M classes.

SDE Analysis

S - scarce materials, i.e. hardly available
D - difficult materials, i.e. difficult to source
E - easy materials, i.e. materials available easily

SDE analysis is done based on the purchasing problems associated with items on a day-to-day basis.
Some of the purchasing problems are:
Long lead times.
Scarcity and limited availability.
Sourcing the same material from many geographically scattered sources.
Uncertain and unreliable sources of supply.

The purchasing department classifies these materials and formulates the strategy and policy of procurement accordingly. So the classification of materials is done based on the level of difficulty in sourcing.

S Class Materials: These materials are always in shortage and difficult to procure. They sometimes require government approvals or procurement through government agencies, and normally one has to make payment in advance to source them. Purchase policies are very liberal for such materials.
D Class Materials: These materials, though not easy to procure, are available at longer lead times, and the source of supply may be very far from the point of consumption. Procurement of these materials requires planning and scheduling in advance. Particular OEM spares for machinery may fall under this category, as the OEM may be very far from the ordering or consumption location.
E Class Materials: These materials are normally standard items, easily available in the market, and can be purchased at any time.
GOLF Analysis: The Government, Ordinary, Local and Foreign report helps you to analyse materials based on the location and type of the supplying organization.
G - government suppliers
O - ordinary or non-government suppliers
L - local suppliers
F - foreign suppliers

FSN Analysis: Classification of materials based on movement, i.e. fast moving, slow moving and non-moving. Sometimes also called FNS (fast moving, normal moving and slow moving).

VED Analysis: This analysis classifies materials according to their criticality to production, i.e. how, and to what extent, production is affected if a given material is not available. V - vital; E - essential; D - desirable. A V class item is one whose absence stops production; water, power and compressed air are examples of vital items. If an E (essential) class item is not available, the stockout cost is very high. If a D (desirable) class item is not available, there is no immediate production loss and the stockout cost is very low.
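Since the XYZ grouping described above is just a cumulative-percentage cut over stock values sorted in descending order, it is easy to sketch in code. Below is a minimal Python illustration; the item names and values are hypothetical, and the 60%/90% cumulative cut-offs correspond to the 60/30/10 split described earlier.

```python
# A minimal sketch of XYZ classification (hypothetical data).
# Items are sorted by current stock value, descending; each item is
# classed by where the running cumulative share of total value falls.

def xyz_classify(stock_values, x_cut=0.60, y_cut=0.90):
    """stock_values: dict mapping item name -> current stock value."""
    total = sum(stock_values.values())
    classes = {}
    running = 0.0
    for item, value in sorted(stock_values.items(), key=lambda kv: -kv[1]):
        running += value
        share = running / total
        # X: within the first 60% of value, Y: next 30%, Z: last 10%
        classes[item] = "X" if share <= x_cut else ("Y" if share <= y_cut else "Z")
    return classes

items = {"bearings": 50000, "motors": 30000, "gaskets": 12000,
         "bolts": 5000, "labels": 3000}
print(xyz_classify(items))
# {'bearings': 'X', 'motors': 'Y', 'gaskets': 'Z', 'bolts': 'Z', 'labels': 'Z'}
```

Note that an item straddling a cut-off is pushed into the lower class here; real implementations differ on how they treat that boundary.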

Inventory Management
Inventory management is primarily about specifying the size and placement of stocked goods. Inventory is required at different locations within a facility, or at many locations of a supply network, to support the regular and planned course of production and the stocking of materials. The scope of inventory management concerns the fine lines between replenishment lead time, carrying costs of inventory, asset management, inventory forecasting, inventory valuation, inventory visibility, future inventory price forecasting, physical inventory, available physical space for inventory, quality management, replenishment, returns and defective goods, and demand forecasting. Balancing these competing requirements leads to optimal inventory levels, an on-going process as business needs shift and react to the wider environment. Inventory management involves a retailer seeking to acquire and maintain a proper merchandise assortment while keeping ordering, shipping, handling and related costs in check. It also involves the systems and processes that identify inventory requirements, set targets, provide replenishment techniques, report actual and projected inventory status, and handle all functions related to the tracking and management of material. This includes monitoring material moved into and out of stockroom locations and reconciling inventory balances; it may also include ABC analysis, lot tracking, cycle counting support, etc. The primary objective in managing inventories is determining and controlling stock levels within the physical distribution function, to balance the need for product availability against the need to minimize stock holding and handling costs.

Business inventory


The reasons for keeping stock

There are three basic reasons for keeping an inventory:


1. Time - The time lags present in the supply chain, from supplier to user at every stage, require that you maintain certain amounts of inventory to use during this "lead time." In practice, however, inventory is maintained for consumption during variations in lead time; lead time itself can be addressed by ordering that many days in advance.
2. Uncertainty - Inventories are maintained as buffers to meet uncertainties in demand, supply and the movement of goods.
3. Economies of scale - The ideal condition of "one unit at a time, at the place where the user needs it, when he needs it" tends to incur high logistics costs. Buying, moving and storing in bulk brings economies of scale, and hence inventory.

All of these reasons for holding stock can apply to any owner or product.
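To make the first two reasons concrete, here is a hedged sketch of the standard reorder-point calculation: lead-time demand covers reason 1 (time), and a safety buffer covers reason 2 (uncertainty). The demand figures and the ~95% service factor are illustrative assumptions, not values from the text.

```python
# Reorder point = lead-time demand + safety stock (hypothetical figures).
import math

daily_demand = 40          # units/day (assumed)
demand_sd = 8              # std dev of daily demand (assumed)
lead_time_days = 5
z = 1.65                   # service factor for roughly a 95% service level

lead_time_demand = daily_demand * lead_time_days           # reason 1: time
safety_stock = z * demand_sd * math.sqrt(lead_time_days)   # reason 2: uncertainty
reorder_point = lead_time_demand + safety_stock

print(f"reorder point: {reorder_point:.0f} units "
      f"(= {lead_time_demand} lead-time demand + {safety_stock:.0f} safety stock)")
```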


Special terms used in dealing with inventory

Stock Keeping Unit (SKU): a unique combination of all the components that are assembled into the purchasable item. Any change in the packaging or product is therefore a new SKU. This level of detailed specification assists in managing inventory. Stockout: running out of the inventory of an SKU.[2] "New old stock" (sometimes abbreviated NOS): a term used in business for merchandise offered for sale that was manufactured long ago but has never been used. Such merchandise may no longer be produced, and the new old stock may represent the only market source of a particular item at the present time.

Typology

1. Buffer/safety stock
2. Cycle stock (used in batch processes; the available inventory excluding buffer stock)
3. De-coupling stock (buffer stock held between the machines in a single process, which serves as a buffer for the next machine, allowing a smooth flow of work instead of waiting for the previous or next machine in the same process)
4. Anticipation stock (extra stock built up for periods of increased demand - e.g. ice cream for summer)
5. Pipeline stock (goods still in transit or in the process of distribution - they have left the factory but not yet arrived at the customer)

Inventory examples

While accountants often discuss inventory in terms of goods for sale, organizations - manufacturers, service-providers and not-for-profits - also have inventories (fixtures, furniture, supplies, ...) that they do not intend to sell. Manufacturers', distributors' and wholesalers' inventory tends to cluster in warehouses. Retailers' inventory may exist in a warehouse or in a shop or store accessible to customers. Inventories not intended for sale to customers or clients may be held in any premises an organization uses. Stock ties up cash and, if uncontrolled, it becomes impossible to know the actual level of stocks and therefore impossible to control them. While the reasons for holding stock were covered earlier, most manufacturing organizations usually divide their "goods for sale" inventory into:

Raw materials - materials and components scheduled for use in making a product.
Work in process (WIP) - materials and components that have begun their transformation to finished goods.
Finished goods - goods ready for sale to customers.
Goods for resale - returned goods that are salable.

For example:

Manufacturing

A canned food manufacturer's materials inventory includes the ingredients to form the foods to be canned, empty cans and their lids (or coils of steel or aluminum for constructing those components), labels, and anything else (solder, glue, ...) that will form part of a finished can. The firm's work in process includes those materials from the time of release to the work floor until they become complete and ready for sale to wholesale or retail customers: vats of prepared food, filled cans not yet labeled, sub-assemblies of food components, and perhaps finished cans not yet packaged into cartons or pallets. Its finished goods inventory consists of all the filled and labeled cans of food in its warehouse that it has manufactured and wishes to sell to food distributors (wholesalers), to grocery stores (retailers), and perhaps even to consumers through arrangements like stores and outlet centers.

Case studies are very revealing, and consistently show that improving inventory management has two parts: the capability of the organisation to manage inventory, and the way in which it chooses to do so. For example, a company may wish to install a complex inventory system, but unless there is a good understanding of the role of inventory and its parameters, and an effective business process to support it, the system cannot bring the necessary benefits to the organisation in isolation. Typical inventory management techniques include Pareto-curve ABC classification and Economic Order Quantity (EOQ) management; a worked EOQ sketch follows below. A more sophisticated method, the K Curve Methodology, takes these two techniques further, combining aspects of each.[4] A case study of K-curve benefits to one company shows a successful implementation.[5] Unnecessary inventory adds enormously to the working capital tied up in the business, as well as to the complexity of the supply chain; reduction and elimination of these inventory 'wait' states is a key concept in Lean.[6] However, too big an inventory reduction made too quickly can leave a business anorexic. There are well-proven processes and techniques to assist in inventory planning and strategy, both at the business overview and at the part-number level, yet many of the big MRP and ERP systems do not offer the necessary inventory planning tools within their integrated planning applications.
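As a concrete illustration of the EOQ technique mentioned above, the classic formula Q* = sqrt(2DS/H) trades off ordering cost against holding cost. The figures below are hypothetical.

```python
# Economic Order Quantity: Q* = sqrt(2 * D * S / H), hypothetical inputs.
import math

annual_demand = 12000      # D: units per year
order_cost = 50.0          # S: fixed cost per order placed
holding_cost = 2.4         # H: holding cost per unit per year

eoq = math.sqrt(2 * annual_demand * order_cost / holding_cost)
orders_per_year = annual_demand / eoq
print(f"EOQ = {eoq:.0f} units, about {orders_per_year:.1f} orders/year")
# EOQ = 707 units, about 17.0 orders/year
```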

Principle of inventory proportionality


Purpose

Inventory proportionality is the goal of demand-driven inventory management. The primary optimal outcome is to have the same number of days' (or hours', etc.) worth of inventory on hand across all products, so that all products run out simultaneously. In such a case there is no "excess inventory" - inventory of one product left over when another product runs out. Excess inventory is sub-optimal because the money spent to obtain it could have been utilized better elsewhere, i.e. on the product that just ran out. The secondary goal of inventory proportionality is inventory minimization. By integrating accurate demand forecasting with inventory management, replenishment inventories can be scheduled to arrive just in time to replenish the product destined to run out first, while at the same time balancing out the inventory supply of all products to make their inventories more proportional, and thereby closer to achieving the primary goal. Accurate demand forecasting also allows the desired inventory proportions to be dynamic, by determining expected sales out into the future; this allows inventory to be in proportion to expected short-term sales or consumption rather than to past averages, a much more accurate and optimal outcome. Integrating demand forecasting into inventory management in this way also allows for the prediction of the "can fit" point when inventory storage is limited on a per-product basis.
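A minimal sketch of the core calculation behind inventory proportionality: express each product's on-hand stock in days of supply and replenish toward a common target, so that runout is simultaneous. The products and figures below are hypothetical (they echo the motor-fuel example in the next subsection).

```python
# Days-of-supply comparison across products (hypothetical data).
on_hand = {"regular": 9000, "midgrade": 2400, "premium": 1750}   # gallons
daily_sales = {"regular": 1500, "midgrade": 300, "premium": 250} # gallons/day

days_of_supply = {g: on_hand[g] / daily_sales[g] for g in on_hand}
first_out = min(days_of_supply, key=days_of_supply.get)

for grade, days in sorted(days_of_supply.items(), key=lambda kv: kv[1]):
    print(f"{grade}: {days:.1f} days of supply")
print(f"replenish '{first_out}' first; stock above its level is excess")
```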
Applications

The technique of inventory proportionality is most appropriate for inventories that remain unseen by the consumer. As opposed to "keep full" systems where a retail consumer would like to see full shelves of the product they are buying so as not to think they are buying something old, unwanted or stale; and differentiated from the "trigger point" systems where product is reordered when it hits a certain level; inventory proportionality is used effectively by just-in-time manufacturing processes and retail applications where the product is hidden from view. One early example of inventory proportionality used in a retail application in the United States is for motor fuel. Motor fuel (e.g. gasoline) is generally stored in underground storage tanks. The motorists do not know whether they are buying gasoline off the top or bottom of the tank, nor need they care. Additionally, these storage tanks have a maximum capacity and cannot be overfilled. Finally, the product is expensive. Inventory proportionality is used to balance the inventories of the different grades of motor fuel, each stored in dedicated tanks, in proportion to the sales of each grade. Excess inventory is not seen or valued by the consumer, so it is simply cash sunk (literally) into the ground. Inventory proportionality minimizes the amount of excess inventory carried in underground storage tanks. This application for motor fuel was first developed and implemented by Petrolsoft Corporation in 1990 for Chevron Products Company. Most major oil companies use such systems today.[7]
Roots

The use of inventory proportionality in the United States is thought to have been inspired by the Japanese just-in-time parts inventory management made famous by Toyota Motors in the 1980s.[3]

High-level inventory management


It seems that around 1880[8] there was a change in manufacturing practice, from companies with relatively homogeneous lines of products to vertically integrated companies with unprecedented diversity in processes and products. Those companies (especially in metalworking) attempted to achieve success through economies of scope - the gains of jointly producing two or more products in one facility. Managers now needed information on the effect of product-mix decisions on overall profits, and therefore needed accurate product-cost information. A variety of attempts to achieve this were unsuccessful due to the huge overhead of the information processing of the time. However, the burgeoning need for financial reporting after 1900 created unavoidable pressure for financial accounting of stock, and management's need to cost-manage products became overshadowed. In particular, it was the need for audited accounts that sealed the fate of managerial cost accounting. The dominance of financial reporting over management accounting remains to this day with few exceptions, and the financial-reporting definitions of 'cost' have distorted effective management 'cost' accounting since that time. This is particularly true of inventory. Hence, high-level financial inventory management has these two basic formulas, which relate to the accounting period:
1. Cost of beginning inventory at the start of the period + inventory purchases within the period + cost of production within the period = cost of goods available
2. Cost of goods available - cost of ending inventory at the end of the period = cost of goods sold

The benefit of these formulae is that the first absorbs all overheads of production and raw material costs into a value of inventory for reporting. The second formula then creates the new start point for the next period and gives a figure to be subtracted from the sales price to determine some form of sales-margin figure. Manufacturing management is more interested in inventory turnover ratio or average days to sell inventory since it tells them something about relative inventory levels.
Inventory turnover ratio (also known as inventory turns) = cost of goods sold / Average Inventory = Cost of Goods Sold / ((Beginning Inventory + Ending Inventory) / 2)

and its inverse


Average Days to Sell Inventory = Number of Days a Year / Inventory Turnover Ratio = 365 days a year / Inventory Turnover Ratio
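A quick worked example of the two ratios, using hypothetical period figures (all in the same currency units):

```python
# Inventory turns and average days to sell (hypothetical figures).
cogs = 600000.0
beginning_inventory = 120000.0
ending_inventory = 80000.0

average_inventory = (beginning_inventory + ending_inventory) / 2
inventory_turns = cogs / average_inventory
days_to_sell = 365 / inventory_turns

print(f"turns = {inventory_turns:.1f}, average days to sell = {days_to_sell:.0f}")
# turns = 6.0, average days to sell = 61
```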

This ratio estimates how many times the inventory turns over in a year. The number tells how much cash (or goods) is tied up waiting for the process, and is a critical measure of process reliability and effectiveness. A factory with two inventory turns has six months' stock on hand, which is generally not a good figure (depending on the industry), whereas a factory that moves from six turns to twelve turns has probably improved effectiveness by 100%. This improvement will have some negative results in the financial reporting, since the 'value' now stored in the factory as inventory is reduced. While these accounting measures of inventory are very useful because of their simplicity, they are also fraught with the danger of their own assumptions. There are, in fact, so many things that can vary hidden under this appearance of simplicity that a variety of 'adjusting' assumptions may be used. These include:

Specific identification
Weighted average cost
Moving-average cost
FIFO and LIFO

Inventory turn is a financial accounting tool for evaluating inventory; it is not necessarily a management tool. Inventory management should be forward-looking, whereas this methodology is based on the historical cost of goods sold, so the ratio may not reflect future production or customer demand. Business models such as Just in Time (JIT) inventory, Vendor Managed Inventory (VMI) and Customer Managed Inventory (CMI) attempt to minimize on-hand inventory and increase inventory turns. VMI and CMI have gained considerable attention due to the success of third-party vendors who offer added expertise and knowledge that organizations may not possess.

Accounting for inventory


Each country has its own rules about accounting for inventory that fit with its financial-reporting rules. For example, organizations in the U.S. define inventory to suit their needs within Generally Accepted Accounting Principles (GAAP), the rules defined by the Financial Accounting Standards Board (FASB) (and others) and enforced by the U.S. Securities and Exchange Commission (SEC) and other federal and state agencies. Other countries often have similar arrangements, but with their own GAAP and national agencies instead. It is intentional that financial accounting uses standards that allow the public to compare firms' performance, while cost accounting functions internally to an organization and can therefore operate with much greater flexibility. A discussion of inventory from the standard and Theory of Constraints-based (throughput) cost accounting perspectives follows some examples and a discussion of inventory from a financial accounting perspective. The internal costing/valuation of inventory can be complex. Whereas in the past most enterprises ran simple, one-process factories, such enterprises are quite probably in the minority in the 21st century. Where 'one-process' factories exist, there is a market for the goods created, which establishes an independent market value for the good. Today, with multistage-process companies, there is much inventory that would once have been finished goods but is now held as 'work in process' (WIP). This needs to be valued in the accounts, but the valuation is a management decision, since there is no market for the partially finished product. This somewhat arbitrary 'valuation' of WIP, combined with the allocation of overheads to it, has led to some unintended and undesirable results.
Financial accounting

An organization's inventory can appear a mixed blessing: it counts as an asset on the balance sheet, but it also ties up money that could serve other purposes and requires additional expense for its protection. Inventory may also cause significant tax expenses, depending on particular countries' laws regarding depreciation of inventory, as in Thor Power Tool Company v. Commissioner. Inventory appears as a current asset on an organization's balance sheet because the organization can, in principle, turn it into cash by selling it. Some organizations hold larger inventories than their operations require in order to inflate their apparent asset value and their perceived profitability. In addition to the money tied up by acquiring inventory, inventory also brings associated costs for warehouse space, for utilities, for insurance, and for the staff needed to handle it and protect it from fire and other disasters, obsolescence, shrinkage (theft and errors), and so on. Such holding costs can mount up to between a third and a half of the inventory's acquisition value per year. Businesses that stock too little inventory cannot take advantage of large orders from customers if they cannot deliver. The conflicting objectives of cost control and customer service often pit an organization's financial and operating managers against its marketing departments. Salespeople, in particular, often receive sales-commission payments, so unavailable goods may reduce their potential personal income. This conflict can be minimised by reducing production time to near or below customers' expected delivery time. This effort, known as "lean production", will significantly reduce the working capital tied up in inventory and reduce manufacturing costs (see the Toyota Production System).
Role of inventory accounting

By helping the organization to make better decisions, accountants can help the public sector to change in a very positive way that delivers increased value for the taxpayer's investment. They can also help to incentivise progress and to ensure that reforms are sustainable and effective in the long term, by ensuring that success is appropriately recognized in both the formal and informal reward systems of the organization. To say that they have a key role to play is an understatement. Finance is connected to most, if not all, of the key business processes within the organization. It should be steering the stewardship and accountability systems that ensure the organization is conducting its business in an appropriate, ethical manner. It is critical that these foundations are firmly laid; so often they are the litmus test by which public confidence in the institution is either won or lost. Finance should also be providing the information, analysis and advice to enable the organization's service managers to operate effectively. This goes beyond the traditional preoccupation with budgets (how much have we spent so far, how much do we have left to spend?). It is about helping the organization to better understand its own performance: making the connections and understanding the relationships between given inputs - the resources brought to bear - and the outputs and outcomes that they achieve. It is also about understanding and actively managing risks within the organization and its activities.
FIFO vs. LIFO accounting

When a merchant sells goods from inventory, the value of the inventory account is reduced by the cost of goods sold (COGS). This is simple where the cost of the goods has not varied across those held in stock, but where it has, an agreed method must be used to evaluate it. For commodity items that one cannot track individually, accountants must choose a method that fits the nature of the sale. Two popular methods are FIFO and LIFO accounting (first in, first out; last in, first out). FIFO regards the first unit that arrived in inventory as the first one sold. LIFO considers the last unit arriving in inventory as the first one sold. Which method an accountant selects can have a significant effect on net income and book value and, in turn, on taxation. Using LIFO accounting for inventory, a company generally reports lower net income and lower book value, due to the effects of inflation; this generally results in lower taxation. Due to LIFO's potential to skew inventory value, UK GAAP and IAS have effectively banned LIFO inventory accounting.
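To make the difference concrete, the sketch below computes COGS for the same sale under FIFO and LIFO over hypothetical purchase layers; with rising prices, LIFO books the higher COGS and hence the lower reported income, as described above.

```python
# FIFO vs. LIFO cost of goods sold over hypothetical inventory layers.
from collections import deque

purchases = [(100, 10.0), (100, 12.0), (100, 14.0)]  # (units, unit cost), oldest first
units_sold = 150

def cogs(layers, qty, lifo=False):
    # Consume layers oldest-first (FIFO) or newest-first (LIFO).
    pool = deque(reversed(layers)) if lifo else deque(layers)
    total = 0.0
    while qty > 0:
        units, cost = pool.popleft()
        take = min(units, qty)
        total += take * cost
        qty -= take
    return total

print("FIFO COGS:", cogs(purchases, units_sold))        # 100*10 + 50*12 = 1600.0
print("LIFO COGS:", cogs(purchases, units_sold, True))  # 100*14 + 50*12 = 2000.0
```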
Standard cost accounting

Standard cost accounting uses ratios called efficiencies that compare the labour and materials actually used to produce a good with those that the same goods would have required under "standard" conditions. As long as actual and standard conditions are similar, few problems arise. Unfortunately, standard cost accounting methods were developed about 100 years ago, when labor comprised the most important cost in manufactured goods, and standard methods continue to emphasize labor efficiency even though that resource now constitutes a (very) small part of cost in most cases. Standard cost accounting can hurt managers, workers, and firms in several ways. For example, a policy decision to increase inventory can harm a manufacturing manager's performance evaluation. Increasing inventory requires increased production, which means that processes must operate at higher rates. When (not if) something goes wrong, the process takes longer and uses more than the standard labor time. The manager appears responsible for the excess, even though he or she has no control over the production requirement or the problem. In adverse economic times, firms use the same efficiencies to downsize, rightsize, or otherwise reduce their labor force. Workers laid off under those circumstances have even less control over excess inventory and cost efficiencies than their managers. Many financial and cost accountants have agreed for many years on the desirability of replacing standard cost accounting; they have not, however, found a successor.
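For illustration, here is a minimal sketch of the labor "efficiency" ratio described at the start of this section, comparing the standard hours allowed for the actual output with the hours actually used. All figures are assumed.

```python
# Labor efficiency = standard hours allowed / actual hours used
# (hypothetical figures; a value above 1.0 beats the standard).
standard_hours_per_unit = 0.5
units_produced = 1000
actual_hours = 560.0

standard_hours_allowed = standard_hours_per_unit * units_produced
labor_efficiency = standard_hours_allowed / actual_hours

print(f"labor efficiency: {labor_efficiency:.0%}")  # ~89% of standard here
```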
Theory of constraints cost accounting

Eliyahu M. Goldratt developed the Theory of Constraints in part to address the cost-accounting problems of what he calls the "cost world." He offers a substitute, called throughput accounting, that uses throughput (money for goods sold to customers) in place of output (goods produced that may sell or may boost inventory) and considers labor a fixed rather than a variable cost. He defines inventory simply as everything the organization owns that it plans to sell, including buildings, machinery, and many other things in addition to the categories listed here. Throughput accounting recognizes only one class of variable costs: the truly variable costs, like materials and components, which vary directly with the quantity produced. Finished goods inventories remain balance-sheet assets, but labor-efficiency ratios no longer evaluate managers and workers. Instead of providing an incentive to reduce labor cost, throughput accounting focuses attention on the relationships between throughput (revenue or income) on one hand and controllable operating expenses and changes in inventory on the other. Those relationships direct attention to the constraints or bottlenecks that prevent the system from producing more throughput, rather than to people, who have little or no control over their situations.
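A small sketch of this throughput view, with hypothetical figures: throughput counts revenue minus only the truly variable costs, while labor sits inside fixed operating expense.

```python
# Throughput accounting (hypothetical figures).
revenue = 250000.0
truly_variable_costs = 90000.0   # materials and components only
operating_expense = 120000.0     # includes labor, treated as fixed

throughput = revenue - truly_variable_costs
net_profit = throughput - operating_expense
print(f"throughput = {throughput:.0f}, net profit = {net_profit:.0f}")
```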

National accounts


Inventories also play an important role in national accounts and the analysis of the business cycle. Some short-term macroeconomic fluctuations are attributed to the inventory cycle.

Distressed inventory
Also known as distressed or expired stock, distressed inventory is inventory whose potential to be sold at a normal cost has passed or will soon pass. In certain industries it could also mean that the stock is, or will soon be, impossible to sell. Examples of distressed inventory include products that have reached their expiry date, or have reached a date in advance of expiry at which the planned market will no longer purchase them (e.g. with 3 months left to expiry); clothing that is defective or out of fashion; music that is no longer popular; and old newspapers or magazines. It also includes computer or consumer-electronic equipment that is obsolete or discontinued and whose manufacturer is unable to support it. One current example of distressed inventory is the VHS format.[9] In 2001, Cisco wrote off inventory worth US$2.25 billion due to duplicate orders, one of the biggest inventory write-offs in business history.

Inventory credit


Inventory credit refers to the use of stock, or inventory, as collateral to raise finance. Where banks may be reluctant to accept traditional collateral, for example in developing countries where land title may be lacking, inventory credit is a potentially important way of overcoming financing constraints. This is not a new concept; archaeological evidence suggests that it was practiced in Ancient Rome. Obtaining finance against stocks of a wide range of products held in a bonded warehouse is common in much of the world. It is, for example, used with Parmesan cheese in Italy.[11] Inventory credit on the basis of stored agricultural produce is widely used in Latin American countries and in some Asian countries.[12] A precondition for such credit is that banks must be confident that the stored product will be available if they need to call on the collateral; this implies the existence of a reliable network of certified warehouses. Banks also face problems in valuing the inventory. The possibility of sudden falls in commodity prices means that they are usually reluctant to lend more than about 60% of the value of the inventory at the time of the loan.

Kanban Systems
Kanban (kan-ban) - Theory and Practice

Introduction: The concept behind this lean manufacturing tool is to reduce costs in high-volume production lines. One way to do this is to smooth and balance material flows by means of controlled inventories. Kanban, translated as "signal", allows an organization to reduce production lead time, which in turn reduces the amount of inventory required. In order to determine optimum system designs, research often uses simulation to determine the number of kanbans and to study various aspects of pull systems; see for example [1,2]. A heuristic design method has been proposed by Ettl and Markus [3], which can be used to evaluate a system's performance using alternative network partitions and allocations of kanbans. The method integrates analytical techniques and a general-purpose genetic algorithm in order to model a system, and it provides a useful procedure for evaluating the impact of design alternatives; it can therefore serve as a decision-support tool for managers planning a large-scale manufacturing system.

What is a kanban? A kanban is a card containing all the information about the work required on a product at each stage along its path to completion, and about which parts are needed at subsequent processes. These cards are used to control work-in-progress (W.I.P.), production, and inventory flow.

A kanban system allows a company to use Just-In-Time (JIT) production and ordering systems to minimize its inventories while still satisfying customer demands. It consists of a set of these cards, one allocated for each part being manufactured, that travel between preceding and subsequent processes.
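The introduction mentions determining the number of kanbans in a loop. A common rule of thumb (a hedged sketch with assumed figures, not a method taken from the text) sizes the loop as demand times replenishment lead time, inflated by a safety factor and divided by container capacity:

```python
# Rule-of-thumb kanban count:
# N = demand * lead time * (1 + safety factor) / container capacity
import math

demand_per_hour = 60        # parts consumed per hour (assumed)
lead_time_hours = 2.0       # time for a card to cycle back with parts (assumed)
safety_factor = 0.10        # buffer against variability (assumed)
container_capacity = 25     # parts per container/card (assumed)

n_kanbans = math.ceil(demand_per_hour * lead_time_hours
                      * (1 + safety_factor) / container_capacity)
print(f"kanbans required: {n_kanbans}")  # 6 with these figures
```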

The Kanban System
The kanban system was developed more than 20 years ago by Taiichi Ohno, a vice president of Toyota, to achieve objectives that include [4]:

o reducing costs by eliminating waste/scrap
o creating work sites that can respond to changes quickly
o facilitating the methods of achieving and assuring quality control
o designing work sites according to human dignity, mutual trust and support, allowing workers to reach their maximum potential

Why kanban? Dramatic changes away from high product throughput and high capacity loads, towards lower production times and lower work-in-progress, have led to the idea of incorporating kanban systems in manufacturing industries (most notably the automotive industry). These systems are most commonly used to implement pull-type control in production systems, with the aim of reducing costs by minimizing W.I.P. inventory. This gives an organization the ability to adapt to changes in demand, and therefore to adjust production, more quickly. A pull-type production line is a sequence of production stages performing various process steps on parts, where each stage consists of several workstations in tandem. The flow of parts through the overall facility is controlled by a combined push/pull control policy, which is established by the kanbans. A push-type policy is used for producing parts within each individual production stage, but parts are pulled between production stages in accordance with the rate at which parts are being consumed by the downstream stages.

Types of kanbans: The two most common types of kanban used today are: 1. the withdrawal (conveyance) kanban, and 2. the production kanban.

Withdrawal (conveyance) kanban: The main function of a withdrawal kanban is to pass the authorization for the movement of parts from one stage to another. It collects parts from the preceding process, moves them to the next process, and remains with the parts until the last part has been consumed by that process. The withdrawal kanban then travels back to the preceding process to get parts, thus creating the cycle. A withdrawal kanban usually carries the following information:

o part number
o part name
o lot size
o routing process
o name of the next process
o location of the next process
o name of the preceding process
o location of the preceding process
o container type
o container capacity
o number of containers released

The withdrawal kanban layout can be designed in many ways to display this information.

Production kanban: The primary function of the production kanban is to release an order to the preceding stage to build the lot size indicated on the card. The production kanban card should carry the following information: the materials required as inputs at the preceding stage; the parts required as inputs at the preceding stage; and the information stated on the withdrawal kanban. The first two pieces of information are not required on the withdrawal kanban, as it is used only to communicate the authorization of movement of parts between work stations.

Flow of kanban-controlled production lines: A kanban system consists of a tandem network of N work stations distributed amongst S production stages. Each production stage consists of one or more workstations, each with an unlimited local buffer for storing unfinished parts. In production stage i there are Ki kanbans and Ni work stations. In order for a part to enter production stage i, it must first acquire a free withdrawal kanban. Once the part has entered the work station, it receives a new production kanban, which remains attached to the part until all work steps associated with the kanban card have been completed. Once the part has completed the stage, the production kanban is removed as soon as a withdrawal kanban becomes available, and the part is moved to the output buffer, where it awaits a new kanban to pull it along to the next production stage (i + 1). The kanban that was associated with the finished part is removed as soon as the part has been withdrawn by the next stage downstream; the newly unattached kanban is then returned to the input buffer, where it serves as a pull signal for the upstream stage (i - 1). The kanban system described here produces only one type of part and operates under the assumption that an unlimited supply of raw materials and an unlimited demand for finished products exist.

As a result of this assumption, no input buffer is necessary at the initial stage and no output buffer is required at the final stage. For a kanban system to operate at maximum efficiency, it is best to use pre-determined lot sizes for the production of all parts; this minimizes the setup and production costs in this type of system.

Kanban preconditions: Kanban is essentially a tool for managing a work place effectively. Because of its importance in the work place, six rules (or preconditions) have been developed to govern the operation of a kanban system:

1. no withdrawal of parts without a kanban
2. the subsequent process comes to withdraw only what is needed
3. do not send defective parts to the subsequent process
4. the preceding process should produce only the exact quantity of parts withdrawn by the subsequent process (this ensures minimum inventory)
5. smoothing of production
6. fine tuning of production using kanban

These rules are largely self-explanatory; for more information, refer to the reference indicated.

Other types of kanban: Three other types of kanban exist for special circumstances:

1. Express kanban - used when shortages of parts occur
2. Emergency kanban - used to replace defective parts and to deal with other uncertainties such as machine failures or changes in production volume
3. Through kanban - used when adjacent work centers are located close to each other; it combines the production and withdrawal kanbans for both stages onto one, "through", kanban

Conclusion: There are many advantages to using the JIT philosophy, among them reduced finished goods and WIP inventory levels, shorter product flow times, and increased worker productivity, allowing for lower production costs and greater responsiveness to customers. JIT objectives are met by using pull-based production planning and control systems, the best-known form of which is kanban control. Kanban is a simple-to-operate control system, which offers the opportunity to delegate routine material transactions on the shop floor. A number of attractive qualities contribute to the growing popularity of kanban control: it is mechanically simple and relatively inexpensive to implement and operate.

Simply determining the quantity and location of kanbans controls the amount of inventory. It is a distributed control system, in which complex system behaviour is controlled by simple local rules. Visual controls provide a direct form of communication and make clear what must be done by managers, supervisors, and operators. Perhaps the most attractive aspect of kanban, and the one that often makes it difficult to implement, is its requirement for, and facilitation of, environmental improvement. The reduction of WIP and the use of visual control make problems more noticeable, which supports the saying that in order to eliminate waste, you must first find it. The tighter coupling between processes creates a dependence that is lacking in many push environments, and forces awareness of problems. Despite its many attractions, kanban control is not without drawbacks. Kanban is often cited as being applicable only in certain environments; the question of its appropriateness in a particular production system revolves around general operating characteristics as well as environmental conditions. The general operating characteristics required can be summarized as the repetitive manufacturing of discrete units in large volumes which can be held relatively steady over a period of time. It is said that kanban is difficult or impossible to use when there are: (1) job orders with short production runs, (2) significant set-ups, (3) scrap loss, or (4) large, unpredictable fluctuations in demand. Even in spite of these problems, kanban will be the system many companies will, and should, use in the near future. In trying to establish an effective kanban system, one must also consider the availability of relevant system information.

Virtual Enterprise: This is an organization with a virtual organizational arrangement, meaning it is made up of contributors that add value to a particular product. As customers' needs change, virtual enterprises (VEs) also change: contributors that do not add enough value to the new product leave the virtual enterprise, while other companies join to add value to the new product. A virtual enterprise is valued because entering new markets is easier, since resources can be easily changed.[1]

Rapid Prototyping: This refers to the generation of a product so that its design and functionality can be tested. In virtual prototyping, the prototype is stored in a virtual environment, such as a computer, rather than being physically created. This allows for observation, analysis and manipulation while reducing the cost and time of product development.[1]

Electronic Commerce: Electronic Data Interchange helps companies to exchange business-related information between computers within the same company, between different companies, and with government organizations and banks. Using the internet, companies can go global and serve customers by improving response times through online systems.[1]

Rapid Partnership: In large companies, work has to be distributed. This is done by making use of teams that work together in appropriate partnerships. Partnership involves using partner companies' potential to come up with new ideas and concepts that lead to new products. Apart from rapid response, good marketing skills are essential to make sure that the company remains competitive.[1]

Physically Distributed Teams: These unite partner enterprises located all over the world, giving a distributed manufacturing environment. Every enterprise contributes to take advantage of a specific business opportunity and works against the market threat. Some reasons for this change are the increasing flexibility of global transport, improvements in manufacturing processes, and economic conditions, for example the shifting of production to countries with low costs.[1]

Concurrent Engineering: CE ensures that new products are designed with all factors considered, including marketing, quality and serviceability. Customers tend to go for products which give them better assurance, service and testing processes. Several tools and techniques are available to improve the design and manufacturing of a product, including functionality analysis, solid modeling and finite element analysis.[1]

Integrated Production Information System: A fully integrated strategy system is required to set a standard and avoid communication problems. This enables users from different firms with different capabilities to understand the shared data and information. Software representation includes data sources, databases, packages and testing equipment.[1]
