
BTEC Higher National Certificate in Civil

Engineering
Unit 3: Applied Mathematics for Construction and the Built
Environment

Assignment Three: Statistics


Student name: Daniel Cryer

Tutor Name: Roger Kendall

Contents

Introduction Page 3

Statistics Page 4-9

Standard deviation Page 9-11

Task 1 Page 12-13

Task 2 Page 14

Task 3 Page 14

Task 4 Page 14

Task 5 Page 15

Task 6 Page 16-21

Task 7 Pages 22-24

Task 8 Pages 25-26

Task 9 Page 27-28

Task 10 Page 29-

Conclusion Page

Appendix Pages

Bibliography Page

Introduction

As the training officer for Land Surveys UK Ltd, a company that carries out land and building surveys together with setting out for construction and civils contracts, it is my task to prepare a report covering certain tasks. This report will be given to new employees in preparation for their progression onto a formal qualification.
They should be able to apply statistics to construction problems using a variety of methods. These may include statistical methods such as tables and graphs, central tendency and dispersion, and distribution theory.

Statistics

Statistics is the science of the collection, organization, and interpretation of data. It deals with all aspects of this, including the
planning of data collection in terms of the design of surveys and
experiments.

A statistician is someone who is particularly well versed in the ways of thinking necessary for the successful application of
statistical analysis. Such people have often gained this experience
through working in any of a wide number of fields. There is also a
discipline called mathematical statistics, which is concerned with
the theoretical basis of the subject.

The diagram below shows a typical deviation graph/probability graph, showing that more probability density will be found the
closer one gets to the expected (mean) value in a normal
distribution. Statistics used in standardized testing assessment are
shown. The scales include standard deviations, cumulative
percentages, percentile equivalents, Z-scores, T-scores, standard
nines, and percentages in standard nines.

Statistics is considered by some to be a mathematical science
pertaining to the collection, analysis, interpretation or explanation,
and presentation of data, while others consider it a branch of
mathematics concerned with collecting and interpreting data.
Because of its empirical roots and its focus on applications,
statistics is usually considered to be a distinct mathematical
science rather than a branch of mathematics.

Statisticians improve the quality of data with the design of experiments and survey sampling. Statistics also provides tools for
prediction and forecasting using data and statistical models.
Statistics is applicable to a wide variety of academic disciplines,
including natural and social sciences, government, and business.
Statistical consultants are available to provide help for
organisations and companies without direct access to expertise
relevant to their particular problems.

Statistical methods can be used to summarize or describe a collection of data; this is called descriptive statistics. This is useful
in research, when communicating the results of experiments. In
addition, patterns in the data may be modeled in a way that
accounts for randomness and uncertainty in the observations, and
are then used to draw inferences about the process or population
being studied; this is called inferential statistics. Inference is a vital
element of scientific advance, since it provides a prediction (based
in data) for where a theory logically leads. To further prove the
guiding theory, these predictions are tested as well, as part of the
scientific method. If the inference holds true, then the descriptive
statistics of the new data increase the soundness of that
hypothesis. Descriptive statistics and inferential statistics together
comprise applied statistics.

Statistics is closely related to probability theory, with which it is often grouped; the difference, roughly, is that in probability theory
one starts from given parameters of a total population, to deduce
probabilities pertaining to samples, while statistical inference,
moving in the opposite direction, is inductive inference from
samples to the parameters of a larger or total population.

In applying statistics to scientific, industrial, or societal problems, it is necessary to begin with a population or process to be studied.
Populations can be diverse topics such as "all persons living in a
country" or "every atom composing a crystal". A population can
also be composed of observations of a process at various times,
with the data from each observation serving as a different member

of the overall group. Data collected about this kind of "population"
constitutes what is called a time series.

For practical reasons, a chosen subset of the population called a sample is studied, as opposed to compiling data about the entire
group (an operation called census). Once a sample that is
representative of the population is determined, data is collected for
the sample members in an observational or experimental setting.
This data can then be subjected to statistical analysis, serving two
related purposes: description and inference.

• Descriptive statistics summarize the population data by describing what was observed in the sample numerically or
graphically. Numerical descriptors include mean and
standard deviation for continuous data types (like heights or
weights), while frequency and percentage are more useful in
terms of describing categorical data (like race).
• Inferential statistics uses patterns in the sample data to draw
inferences about the population represented, accounting for
randomness. These inferences may take the form of:
answering yes/no questions about the data (hypothesis
testing), estimating numerical characteristics of the data
(estimation), describing associations within the data
(correlation) and modeling relationships within the data (for
example, using regression analysis). Inference can extend to
forecasting, prediction and estimation of unobserved values
either in or associated with the population being studied; it
can include extrapolation and interpolation of time series or
spatial data, and can also include data mining.
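The distinction between numerical descriptors for continuous data and frequency counts for categorical data can be sketched in a few lines of Python (the data here is invented purely for illustration):

```python
import statistics

# Hypothetical sample of cube crushing strengths (N/mm²) -- illustrative only.
sample = [24.1, 25.3, 23.8, 26.0, 25.1, 24.7, 25.5, 24.9]

# Descriptive statistics: summarize what was observed in the sample.
mean = statistics.mean(sample)
sd = statistics.stdev(sample)   # sample standard deviation (n - 1 divisor)
print(f"mean = {mean:.2f}, sd = {sd:.2f}")

# Categorical data is summarized with frequencies instead.
grades = ["pass", "pass", "fail", "pass"]
counts = {g: grades.count(g) for g in set(grades)}
print(counts)
```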

The concept of correlation is particularly noteworthy for the potential confusion it can cause. Statistical analysis of a data set
often reveals that two variables of the population under
consideration tend to vary together, as if they were connected. For
example, a study of annual income that also looks at age of death
might find that poor people tend to have shorter lives than affluent
people. The two variables are said to be correlated; however, they
may or may not be the cause of one another. The correlation
phenomena could be caused by a third, previously unconsidered
phenomenon, called a lurking variable or confounding variable. For
this reason, there is no way to immediately infer the existence of a
causal relationship between the two variables.

For a sample to be used as a guide to an entire population, it is important that it is truly representative of that overall population.
Representative sampling assures that the inferences and

conclusions can be safely extended from the sample to the
population as a whole. A major problem lies in determining the
extent to which the sample chosen is actually representative.
Statistics offers methods to estimate and correct for any random
trending within the sample and data collection procedures. There
are also methods for designing experiments that can lessen these
issues at the outset of a study, strengthening its capability to
discern truths about the population.

Randomness is studied using the mathematical discipline of probability theory. Probability is used in "mathematical statistics" to
study the sampling distributions of sample statistics and, more
generally, the properties of statistical procedures. The use of any
statistical method is valid when the system or population under
consideration satisfies the assumptions of the method.

Misuse of statistics can produce subtle but serious errors in description and interpretation — subtle in the sense that even
experienced professionals make such errors, and serious in the
sense that they can lead to devastating decision errors. For
instance, social policy, medical practice, and the reliability of
structures like bridges all rely on the proper use of statistics. Even
when statistical techniques are correctly applied, the results can be
difficult to interpret for those lacking expertise. The statistical
significance of a trend in the data — which measures the extent to
which a trend could be caused by random variation in the
sample — may or may not agree with an intuitive sense of its
significance. The set of basic statistical skills that people need to
deal with information in their everyday lives properly is referred to
as statistical literacy.

Experimental and observational studies

A common goal for a statistical research project is to investigate causality, and in particular to draw a conclusion on the effect of
changes in the values of predictors or independent variables on
dependent variables or response. There are two major types of
causal statistical studies: experimental studies and observational
studies. In both types of studies, the effect of differences of an
independent variable (or variables) on the behavior of the
dependent variable are observed. The difference between the two
types lies in how the study is actually conducted. Each can be very
effective. An experimental study involves taking measurements of
the system under study, manipulating the system, and then taking
additional measurements using the same procedure to determine if
the manipulation has modified the values of the measurements. In

contrast, an observational study does not involve experimental
manipulation. Instead, data are gathered and correlations between
predictors and response are investigated.

Experiments

The basic steps of a statistical experiment are:

1. Planning the research, including finding the number of replicates of the study, using the following information:
preliminary estimates regarding the size of treatment effects,
alternative hypotheses, and the estimated experimental
variability. Consideration of the selection of experimental
subjects and the ethics of research is necessary.
Statisticians recommend that experiments compare (at least)
one new treatment with a standard treatment or control, to
allow an unbiased estimate of the difference in treatment
effects.
2. Design of experiments, using blocking to reduce the
influence of confounding variables, and randomized
assignment of treatments to subjects to allow unbiased
estimates of treatment effects and experimental error. At this
stage, the experimenters and statisticians write the
experimental protocol that shall guide the performance of the
experiment and that specifies the primary analysis of the
experimental data.
3. Performing the experiment following the experimental
protocol and analyzing the data following the experimental
protocol.
4. Further examining the data set in secondary analyses, to
suggest new hypotheses for future study.
5. Documenting and presenting the results of the study.

Levels of measurement

There are four main levels of measurement used in statistics: nominal, ordinal, interval, and ratio. Each of these has a different degree of usefulness in
statistical research. Ratio measurements have both a
meaningful zero value and the distances between different
measurements defined; they provide the greatest flexibility in
statistical methods that can be used for analyzing the data.
Interval measurements have meaningful distances between
measurements defined, but the zero value is arbitrary (as in
the case with longitude and temperature measurements in
Celsius or Fahrenheit). Ordinal measurements have
imprecise differences between consecutive values, but have

a meaningful order to those values. Nominal measurements
have no meaningful rank order among values.

Because variables conforming only to nominal or ordinal measurements cannot be reasonably measured numerically,
sometimes they are grouped together as categorical variables,
whereas ratio and interval measurements are grouped together as
quantitative or continuous variables due to their numerical nature.

Error also refers to the extent to which individual observations in a sample differ from a central value, such as the sample or
population mean. Many statistical methods seek to minimize the
mean-squared error, and these are called "methods of least
squares."

Measurement processes that generate statistical data are also subject to error. Many of these errors are classified as random
(noise) or systematic (bias), but other important types of errors
(e.g., blunder, such as when an analyst reports incorrect units) can
also be important.

Statistics rarely give a simple Yes/No type answer to the question asked of them. Interpretation often comes down to the level of
statistical significance applied to the numbers and often refers to
the probability of a value accurately rejecting the null hypothesis
(sometimes referred to as the p-value).

Referring to statistical significance does not necessarily mean that the overall result is significant in real world terms. For example, in
a large study of a drug it may be shown that the drug has a
statistically significant but very small beneficial effect, such that the
drug will be unlikely to help the patient in a noticeable way. For this
reason we should be careful when using statistics to correctly
interpret the data and to make sure that we come to the correct
conclusion.

Standard deviation

Standard deviation is a widely used measurement of variability or diversity used in statistics and probability theory. It shows how
much variation or "dispersion" there is from the "average" (mean,
or expected/budgeted value). A low standard deviation indicates
that the data points tend to be very close to the mean, whereas
high standard deviation indicates that the data are spread out over
a large range of values.

Technically, the standard deviation of a statistical population, data
set, or probability distribution is the square root of its variance. It is
algebraically simpler though practically less robust than the
average absolute deviation. A useful property of standard deviation
is that, unlike variance, it is expressed in the same units as the
data. Note, however, that for measurements with percentage as
unit, the standard deviation will have percentage points as unit.

In addition to expressing the variability of a population, standard deviation is commonly used to measure confidence in statistical
conclusions. For example, the margin of error in polling data is
determined by calculating the expected standard deviation in the
results if the same poll were to be conducted multiple times. The
reported margin of error is typically about twice the standard
deviation — the radius of a 95 percent confidence interval. In
science, researchers commonly report the standard deviation of
experimental data, and only effects that fall far outside the range of
standard deviation are considered statistically significant — normal
random error or variation in the measurements is in this way
distinguished from causal variation. Standard deviation is also
important in finance, where the standard deviation on the rate of
return on an investment is a measure of the volatility of the
investment.

When only a sample of data from a population is available, the population standard deviation can be estimated by a modified
quantity called the sample standard deviation, explained below.
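The definitions above (standard deviation as the square root of the variance, and the n − 1 divisor for the sample estimate) can be illustrated with a minimal Python sketch using made-up values:

```python
import math

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]   # illustrative values only
n = len(data)
mean = sum(data) / n

# Population variance: mean of squared deviations from the mean.
variance = sum((x - mean) ** 2 for x in data) / n
sigma = math.sqrt(variance)   # population standard deviation, same units as data

# Sample standard deviation uses an n - 1 divisor, the usual estimate
# of the population value when only a sample is available.
s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))

print(sigma)   # 2.0 for this data set
```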

Mean and median

Comparison of common averages

Type             Description                                          Example              Result
Arithmetic mean  Total sum divided by number of values                (1+2+2+3+4+7+9) / 7  4
Median           Middle value that separates the greater and lesser   1, 2, 2, 3, 4, 7, 9  3
                 halves of a data set
Mode             Most frequent number in a data set                   1, 2, 2, 3, 4, 7, 9  2
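The three averages in the table above can be reproduced with Python's statistics module:

```python
import statistics

data = [1, 2, 2, 3, 4, 7, 9]

print(statistics.mean(data))    # 4 -- total sum divided by number of values
print(statistics.median(data))  # 3 -- middle value of the sorted data
print(statistics.mode(data))    # 2 -- most frequent value
```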

In statistics, mean has two related meanings:

• The arithmetic mean (as distinguished from the geometric mean or harmonic mean).
• The expected value of a random variable, which is also called the population mean.

There are other statistical measures that use samples that some
people confuse with averages - including 'median' and 'mode'.
Other simple statistical analyses use measures of spread, such as
range, interquartile range, or standard deviation. For a real-valued
random variable X, the mean is the expectation of X. Note that not
every probability distribution has a defined mean (or variance); see
the Cauchy distribution for an example.

For a data set, the mean is the sum of the values divided by the
number of values. The mean of a set of numbers x1, x2, ..., xn is
typically denoted by x̄, pronounced "x bar". This mean is a type of
arithmetic mean. If the data set was based on a series of
observations obtained by sampling a statistical population, this
mean is termed the "sample mean" to distinguish it from the
"population mean". The mean is often quoted along with the
standard deviation: the mean describes the central location of the
data, and the standard deviation describes the spread. An
alternative measure of dispersion is the mean deviation, equivalent
to the average absolute deviation from the mean. It is less
sensitive to outliers, but less mathematically tractable.

If a series of observations is sampled from a larger population (measuring the heights of a sample of adults drawn from the entire
world population, for example), or from a probability distribution
which gives the probabilities of each possible result, then the
larger population or probability distribution can be used to
construct a "population mean", which is also the expected value for
a sample drawn from this population or probability distribution. For
a finite population, this would simply be the arithmetic mean of the
given property for every member of the population. For a
probability distribution, this would be a sum or integral over every
possible value weighted by the probability of that value. It is a
universal convention to represent the population mean by the
symbol μ. In the case of a discrete probability distribution, the
mean of a discrete random variable x is given by taking the
product of each possible value of x and its probability P(x), and
then adding all these products together, giving μ = Σ x·P(x).
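The discrete formula μ = Σ x·P(x) can be computed directly; a fair six-sided die is the standard illustration:

```python
# Fair die: each of the six faces has probability 1/6.
values = [1, 2, 3, 4, 5, 6]
p = 1 / 6

# Population mean (expected value): sum of value x probability products.
mu = sum(x * p for x in values)
print(mu)   # approximately 3.5
```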

The sample mean may differ from the population mean, especially
for small samples, but the law of large numbers dictates that the

larger the size of the sample, the more likely it is that the sample
mean will be close to the population mean.

Task 1

1. For the slump results tabulated, plot a percentage cumulative frequency distribution diagram and use it to estimate:
a) The median slump value
b) The number of slumps having a value less than 44mm
c) The number of slumps having a value greater than 56mm.

Slump Measure  Frequency  Cumulative Frequency  Percentage
30-35          1          1                     2
35-40          6          7                     13
40-45          7          14                    27
45-50          11         25                    48
50-55          13         38                    73
55-60          9          47                    90
60-65          3          50                    96
65-70          2          52                    100

Using the data provided it is possible to calculate the percentage cumulative frequency for the given slump values. These values can then be plotted to give the graph shown on the next page. It is then possible to use this graph to estimate the median and other values that may be required.

a) Median value taken from graph = 52

b) Using the graph to find the percentage cumulative frequency for a slump of less than 44mm, we can see that the percentage is 21%. We can then use this figure to work backwards and arrive at a figure for an actual number of slumps:

52 × 21/100 = 10.92

The figure must be a whole number, so it is rounded up to give 11 slumps with a value less than 44mm.

c) Again using the graph to find the percentage cumulative frequency for a slump of greater than 56mm, we can see that the value is 82%. In this case, though, it is everything greater than 56mm, so this value of 82% needs to be taken away from 100%. Again we must work backwards from this percentage to give an actual number:

100 − 82 = 18, so 52 × 18/100 = 9.36

This figure must also be a whole number, so it is rounded up to give 10 slumps with a value greater than 56mm.
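The cumulative table used in this task can be generated, and the graph readings cross-checked by linear interpolation, with a short script; interpolated values may differ slightly from readings taken off a hand-drawn curve:

```python
# Slump class upper boundaries (mm) and frequencies from the Task 1 table.
uppers = [35, 40, 45, 50, 55, 60, 65, 70]
freqs  = [1, 6, 7, 11, 13, 9, 3, 2]

total = sum(freqs)                        # 52 results in all
cum = []
running = 0
for f in freqs:
    running += f
    cum.append(running)                   # cumulative frequency
pct = [100 * c / total for c in cum]      # percentage cumulative frequency

print([round(p) for p in pct])

def pct_at(slump):
    """Linearly interpolate the cumulative % at a given slump value."""
    bounds = [30] + uppers
    values = [0] + pct
    for i in range(len(bounds) - 1):
        if bounds[i] <= slump <= bounds[i + 1]:
            frac = (slump - bounds[i]) / (bounds[i + 1] - bounds[i])
            return values[i] + frac * (values[i + 1] - values[i])

# Estimated number of slumps below 44mm: percentage x total / 100.
print(pct_at(44) * total / 100)
```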

Task 2

The crushing strength for concrete cubes is given in the following frequency distribution table. Draw the histogram and show the frequency polygon for the data.

Crushing Strength (N/mm²)  Frequency
19.42 – 19.44              11
19.44 – 19.47              41
19.48 – 19.50              84
19.51 – 19.53              45
19.54 – 19.56              9

Answer shown on graph

Task 3
Density tests on 30 timber specimens produce the following data.
Draw the histogram and calculate the arithmetic mean.

Density(kg/m³) Frequency
570 2
580 3
590 5
600 9
610 6
620 2
630 3

Answer shown on graph

Task 4
The crushing strengths of 60 insulation blocks were determined
and the results are recorded below. Draw the histogram and the
frequency polygon.

Crushing Strength (N/mm²)  Frequency
6.6                        1
6.8 5
7.0 14
7.2 20
7.4 15
7.6 3
7.8 2
Task 5

Determine the mean and standard deviation for the cube crushing
strength results at 7 days.

Class   X    Xc   F    XcF   Xc²F
20-22   21   -2   4    -8    16
22-24   23   -1   7    -7    7
24-26   25   0    14   0     0
26-28   27   1    6    6     6
28-30   29   2    5    10    20
Total             36   1     49

x̄c = Σxc·f / Σf = 1/36 = 0.027

x̄ = 25 + (0.027 × 2) = 25.054

σc = √(Σxc²·f / Σf − x̄c²) = √(49/36 − 0.027²) = 1.166

σ = 1.166 × 2 = 2.333

So the standard deviation for this set of data is 2.333, and the mean for the selected data is 25.054.
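The assumed-mean (coding) working above can be checked in Python; any small differences from the hand calculation come only from rounding x̄c:

```python
import math

# Task 5 data: class midpoints and frequencies; assumed mean 25, class width 2.
mids  = [21, 23, 25, 27, 29]
freqs = [4, 7, 14, 6, 5]
assumed, width = 25, 2

xc = [(m - assumed) // width for m in mids]           # coded deviations -2..2
n = sum(freqs)                                        # 36
sum_xcf  = sum(c * f for c, f in zip(xc, freqs))      # 1
sum_xc2f = sum(c * c * f for c, f in zip(xc, freqs))  # 49

mean = assumed + (sum_xcf / n) * width
sigma = width * math.sqrt(sum_xc2f / n - (sum_xcf / n) ** 2)

print(round(mean, 2), round(sigma, 2))   # approximately 25.06 and 2.33
```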

Task 6

When examining the relationship between slump and strength, determine the probability of the following:
a) A mix having a slump less than 50mm and a cube strength at 28 days greater than 50N/mm²

b) A mix sampled producing a cube strength result at 14 days less than 36N/mm² and then a cube strength result at 28 days less than 50N/mm².

a)

Slump less than 50mm

Class   X      Xc   F    XcF   Xc²F
30-35   32.5   -4   1    -4    16
35-40   37.5   -3   6    -18   54
40-45   42.5   -2   7    -14   28
45-50   47.5   -1   11   -11   11
50-55   52.5   0    13   0     0
55-60   57.5   1    9    9     9
60-65   62.5   2    3    6     12
65-70   67.5   3    2    6     18
Total               52   -26   148

Estimate the mean to be 52.5. Therefore:

x̄c = Σxc·f / Σf = −26/52 = −0.5

Corrected Mean = 52.5 − (0.5 × 5) = 50

σc = √(Σxc²·f / Σf − x̄c²) = √(148/52 − 0.5²) = 1.611

Standard Deviation = 1.611 × class width = 1.611 × 5 = 8.055

So probability = difference from mean ÷ standard deviation:

(50 − 50) / 8.055 = 0

So using the probability chart we can see that a deviation from the mean of zero gives a probability of 0.5.

28 day cube strength

Class   X      Xc   F    XcF   Xc²F
40-45   42.5   -2   2    -4    8
45-50   47.5   -1   2    -2    2
50-55   52.5   0    4    0     0
55-60   57.5   1    8    8     8
60-65   62.5   2    2    4     8
Total               18   6     26

Estimated mean = 52.5

x̄c = Σxc·f / Σf = 6/18 = 0.333

Corrected Mean = 52.5 + (0.333 × 5) = 54.167

σc = √(Σxc²·f / Σf − x̄c²) = √(26/18 − 0.333²) = 1.155

S.D. = 1.155 × class width = 1.155 × 5 = 5.77

So probability = difference from mean ÷ standard deviation:

(54.167 − 50) / 5.77 = 4.167 / 5.77 = 0.722

Using the probability table to find the probability value for a strength greater than 50N/mm², 0.722 gives 0.2642, a 26.42% chance. As the question asks for the probability of greater than 50N/mm², we must add this figure to the lower half of the probability curve.

So 0.5 + 0.2642 = 0.7642 = 76.42% chance

We must then find the final probability for both of these together.

So 0.7642 × 0.5 = 0.3821 = 38.21% chance of there being a mix having a slump less than 50mm and a cube strength at 28 days greater than 50N/mm².

Another way of doing this would be to use ratios, as we can see from the tables below.

Slump Results

Slump Measured (mm)   30-35  35-40  40-45  45-50  50-55  55-60  60-65  65-70
Number                1      6      7      11     13     9      3      2

A slump of less than 50mm would fall in all columns in and below 45-50.

Cube results at 28 days

Crushing strength (N/mm²)   40-45  45-50  50-55  55-60  60-65
Number                      2      2      4      8      2

A cube result at 28 days of greater than 50N/mm² would fall in the 50-55 column and above.

We can use these to calculate ratios and therefore the probability.

So 25/52 for a slump of less than 50mm and 14/18 for a cube strength at 28 days greater than 50N/mm².

We then need to multiply these together to give a final probability for both:

25/52 × 14/18 = 175/468 = 0.374 = 37.4%

We can see that both these methods come up with a similar answer; the latter method is far quicker but as a consequence is less accurate.

b)
Using this simpler method we can see the counts for a cube strength at 14 days of less than 36N/mm²:

Cube results at 14 days

Crushing Strength (N/mm²)   30-32  32-34  34-36  36-38  38-40
Number                      2      4      11     11     8

Cube results at 28 days

Crushing strength (N/mm²)   40-45  45-50  50-55  55-60  60-65
Number                      2      2      4      8      2

18/36 × 4/18 = 1/9 = 0.1111 = 11.11%

If instead we use the longer method we can see:

Class   X    Xc   F    XcF   Xc²F
30-32   31   -2   2    -4    8
32-34   33   -1   4    -4    4
34-36   35   0    11   0     0
36-38   37   1    11   11    11
38-40   39   2    8    16    32
Total             36   19    55

Estimated Mean = 35

x̄c = Σxc·f / Σf = 19/36 = 0.527

Corrected Mean x̄ = 35 + (0.527 × 2) = 36.05

σc = √(Σxc²·f / Σf − x̄c²) = √(55/36 − 0.527²) = √(1.528 − 0.278) = 1.118

σ = 1.118 × strip width = 1.118 × 2 = 2.236

Using this figure for the standard deviation we can then calculate the probability from the difference from mean ÷ standard deviation:

(36.05 − 36) / 2.236 = 0.02236

Use this figure in the probability table to give an answer of 0.008.

The question asks for cube strength less than 36N/mm². We have calculated the probability of the cubes lying between 36 and 36.05 as 0.008. Therefore the probability of a cube lying below this is 0.5 − 0.008 = 0.492.

We must then calculate the result for cube strength less than 50N/mm² at 28 days.

Class   X      Xc   F    XcF   Xc²F
40-45   42.5   -2   2    -4    8
45-50   47.5   -1   2    -2    2
50-55   52.5   0    4    0     0
55-60   57.5   1    8    8     8
60-65   62.5   2    2    4     8
Total               18   6     26

Estimated Mean = 52.5

x̄c = Σxc·f / Σf = 6/18 = 0.333

Corrected Mean x̄ = 52.5 + (0.333 × 5) = 54.167

σc = √(Σxc²·f / Σf − x̄c²) = √(26/18 − 0.333²) = 1.155

Standard deviation = 1.155 × class width = 1.155 × 5 = 5.77

Probability = difference from mean ÷ standard deviation

(54.167 − 50) / 5.77 = 0.722

Using this figure in the probability table we get a figure of 0.2642.

We have calculated the probability of the cubes lying between 50 and 54.167 as 0.2642. Therefore the probability of a cube lying below this is 0.5 − 0.2642 = 0.2358.

Finally, to get the probability of cubes having a strength less than 36N/mm² at 14 days and then a cube strength of less than 50N/mm² at 28 days, we must multiply the two probabilities together:

0.492 × 0.2358 = 0.116 = 11.6%
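The probability-table lookups used throughout Task 6 can be reproduced with the standard normal cumulative distribution function, Φ(z) = ½(1 + erf(z/√2)); small differences from the worked answers come from the table's rounding:

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Part (a): slump < 50mm has probability 0.5 (z = 0); for 28-day strength,
# z = (54.167 - 50) / 5.77 = 0.722, and because the mean lies above 50,
# P(strength > 50) = P(Z < 0.722).
p_slump = phi(0.0)
p_strength = phi(0.722)

print(round(p_slump * p_strength, 3))   # combined probability, about 0.382
```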

Task 7

The heights of 120 pivot blocks were measured.

Height x (mm)   29.4  29.5  29.6  29.7  29.8  29.9
Frequency f     6     25    34    32    18    5

Calculate the mean height and the standard deviation.

For a batch of 2500 such blocks, calculate
a) The limits between which all the heights are likely to lie
b) The number of blocks with height greater than 29.52mm.

Using this table we can construct a standard deviation table as shown below (assumed mean 29.6, class width 0.1mm):

X      Xc   F     XcF   Xc²F
29.4   -2   6     -12   24
29.5   -1   25    -25   25
29.6   0    34    0     0
29.7   1    32    32    32
29.8   2    18    36    72
29.9   3    5     15    45
Total       120   46    198

We can estimate the mean to be 29.6.

The standard deviation for this data is calculated below:

x̄c = Σxc·f / Σf = 46/120 = 0.383

So the corrected mean is x̄ = 29.6 + (0.383 × 0.1) = 29.638mm

σc = √(Σxc²·f / Σf − x̄c²) = √(198/120 − 0.383²) = 1.2261

So the standard deviation is this figure multiplied by the strip width:

σ = 1.2261 × 0.1 = 0.12261mm

a) To work out the limits between which the values are likely to lie, we must calculate the upper and lower limits from the mean. We do this by taking the value for the mean and adding or subtracting 3σ.

So we can see that the upper limit and lower limit for this data are as follows:

29.638 + 3σ = 29.638 + (0.12261 × 3) = 30.01 Upper limit

29.638 − 3σ = 29.638 − (0.12261 × 3) = 29.27 Lower limit

b) The number of blocks with a height greater than 29.52mm can be calculated by finding the probability of this occurring.

Probability = difference from mean ÷ standard deviation

(29.63 − 29.52) / 0.12261 = 0.11 / 0.12261 = 0.897

Use the probability table to give 0.897 = 0.3133.

As we can see from the diagram, everything above 29.52 is required, so we need to add this figure to 0.5:

0.5 + 0.3133 = 0.8133 = 81.33%

So in 2500 blocks we can see that 2500 × 0.8133 = 2033.25. It is impossible to have a fraction of a block, so 2033 blocks will have a height greater than 29.52mm.
Task 8
Components are machined to a nominal diameter of 32.65mm. A
sample batch of 400 components gave a mean diameter of
32.66mm with a standard deviation of 0.02mm. For a production
total of 2400 components, calculate
a) The limits between which all the diameters are likely to lie
b) The number of acceptable components if those with
diameters less than 32.62mm or greater than 32.68mm are
rejected.

a) We are given the mean diameter of 32.66mm and a standard deviation of 0.02mm, so it is possible to calculate the upper and lower limits between which the diameters are likely to lie. We do this by taking the value for the mean and adding or subtracting 3σ (standard deviations).

So 32.66 + (0.02 × 3) = 32.66 + 0.06 = 32.72 Upper limit

And 32.66 − (0.02 × 3) = 32.66 − 0.06 = 32.60 Lower limit

So all diameters will lie between 32.60mm and 32.72mm

b)

Probability = difference from mean ÷ standard deviation

So (32.66 − 32.62) / 0.02 = 0.04 / 0.02 = 2

Then using the probability table we can see that 2 = 0.4773. From the diagram we can see that we have calculated the probability of the components having a diameter between 32.62mm and the mean of 32.66mm, but we require the probability of the components less than 32.62mm, so we need to subtract it from the mean probability of 0.5.

So 0.5 − 0.4773 = 0.0227. Then multiply this by the number of components, giving the number of components rejected for being below 32.62mm:

2400 × 0.0227 = 54.48 rejected
Probability = difference from mean ÷ standard deviation

So (32.68 − 32.66) / 0.02 = 0.02 / 0.02 = 1

Then using the probability table we can see that 1 = 0.3413. From the diagram we can see that we have calculated the probability of the components having a diameter between 32.68mm and the mean of 32.66mm, but we require the probability of the components greater than 32.68mm, so we need to subtract it from the mean probability of 0.5.

So 0.5 − 0.3413 = 0.1587. Then multiply this by the number of components, giving the number of components rejected for being above 32.68mm:

2400 × 0.1587 = 380.88 rejected

So the total number of rejected components is 380.88 + 54.48 = 435.36, rounded up to 436 components rejected.

And therefore 2400 − 436 = 1964 total acceptable components
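Task 8 can be cross-checked with the normal cumulative distribution function; the counts agree with the table-based working to within rounding:

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mean, sd, n = 32.66, 0.02, 2400

# Reject below 32.62mm (z = -2) and above 32.68mm (z = +1).
p_low  = phi((32.62 - mean) / sd)        # lower-tail probability
p_high = 1.0 - phi((32.68 - mean) / sd)  # upper-tail probability

rejected = n * (p_low + p_high)
print(round(rejected))          # about 435; the hand working rounds up to 436
print(n - round(rejected))      # acceptable components
```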

Task 9

A machine is set to produce bolts of nominal diameter 25.0mm. Measurement of the diameters of 60 bolts gave the following frequency distribution:

Diameter x (mm)   23.3–23.7  23.8–24.2  24.3–24.7  24.8–25.2  25.3–25.7  25.8–26.2  26.3–26.7
Frequency f       2          4          10         17         16         8          3

Calculate the mean and the standard deviation from the mean.

For a full run of 3000, calculate the limits between which all the
diameters are likely to lie. Also calculate the number of bolts with
diameters less than 24.45mm.

Class       X      Xc   F    XcF   Xc²F
23.3-23.7   23.5   -3   2    -6    18
23.8-24.2   24.0   -2   4    -8    16
24.3-24.7   24.5   -1   10   -10   10
24.8-25.2   25.0   0    17   0     0
25.3-25.7   25.5   1    16   16    16
25.8-26.2   26.0   2    8    16    32
26.3-26.7   26.5   3    3    9     27
Total                   60   17    119

Estimated Mean = 25.0

x̄c = Σxc·f / Σf = 17/60 = 0.283

Corrected Mean x̄ = 25 + (0.283 × 0.5) = 25.1415mm

σc = √(Σxc²·f / Σf − x̄c²) = √(119/60 − 0.283²) = 1.379

So the standard deviation is this figure multiplied by the strip width:

σ = 1.379 × 0.5 = 0.6895

So the limits between which all diameters are likely to lie are calculated by adding or subtracting 3 standard deviations to the mean:

25.1415 + (0.6895 × 3) = 25.1415 + 2.0685 = 27.21 Upper limit

25.1415 − (0.6895 × 3) = 25.1415 − 2.0685 = 23.073 Lower limit

So the limits between which all diameters are likely to lie are 23.073mm and 27.21mm.

To calculate the number of bolts with diameters less than 24.45mm, we must find the probability for this.

Probability = difference from mean ÷ standard deviation

So (25.1415 − 24.45) / 0.6895 = 0.6915 / 0.6895 = 1.003

From the table of probability we can see that 1.00 gives a probability of 0.3413.

So 0.3413 is the probability of diameters lying between 24.45mm and the mean of 25.1415mm. We must subtract this from the mean probability of 0.5 to give the probability of the diameters being less than 24.45mm:

0.5 − 0.3413 = 0.1587 = 15.87%

So from this we can see that in 3000 bolts, 15.87% will have a diameter less than 24.45mm:

3000 × 0.1587 = 476.1 = 477 bolts
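As a cross-check of the Task 9 working, the mean and standard deviation can be computed directly from the class midpoints without the coding step:

```python
import math

# Class midpoints and frequencies for the bolt diameters in Task 9.
mids  = [23.5, 24.0, 24.5, 25.0, 25.5, 26.0, 26.5]
freqs = [2, 4, 10, 17, 16, 8, 3]

n = sum(freqs)                                             # 60 bolts
mean = sum(m * f for m, f in zip(mids, freqs)) / n
var = sum(f * (m - mean) ** 2 for m, f in zip(mids, freqs)) / n
sd = math.sqrt(var)

print(round(mean, 4), round(sd, 4))   # about 25.1417 and 0.6898
```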

Task 10
Evaluate the methods used in tasks 1 through to 9 and their
suitability within the construction industry.

The methods employed in the previous tasks can be a very effective means of analysis. They can be used to evaluate many
different types of data and can be used widely in the construction
industry. For example, it may be possible to select a concrete company based upon the statistical figures they publish about the strength of their concrete, and therefore choose the company with the lowest standard deviation and thus the best probability of delivering consistent concrete. These methods can be used to compare different companies, allowing us to choose the best business, maximise profits and minimise waste.

In the construction industry, value engineering, value analysis and value management are all titles used to describe a structured
process of examination of the function of a building to ensure that
it is delivered in the most cost-effective way.

A Performance Indicator or Key Performance Indicator (KPI) is an industry jargon term for a type of Measure of Performance. KPIs
are commonly used by an organization to evaluate its success or
the success of a particular activity in which it is engaged.
Sometimes success is defined in terms of making progress toward
strategic goals, but often, success is simply the repeated
achievement of some level of operational goal (zero defects, 10/10
customer satisfaction etc.). Accordingly, choosing the right KPIs is
reliant upon having a good understanding of what is important to
the organization. 'What is important' often depends on the
department measuring the performance - the KPIs useful to a
Finance Team will be quite different to the KPIs assigned to the
sales force, for example. Because of the need to develop a good
understanding of what is important, performance indicator
selection is often closely associated with the use of various
techniques to assess the present state of the business, and its key
activities. These assessments often lead to the identification of
potential improvements; and as a consequence, performance
indicators are routinely associated with 'performance improvement'
initiatives. A very common method for choosing KPIs is to apply a
management framework such as the Balanced Scorecard.

Bibliography

Virdi, 2006. Construction Mathematics. Butterworth-Heinemann.

Greer, 1989. BTEC National Nil I - Mathematics for Technicians. 2nd ed. Nelson Thornes.

Taylor, 2004. BTEC National Mathematics for Technicians. 3rd ed. Nelson Thornes.

Greer, 1982. BTEC First - Mathematics for Technicians. 2nd ed. Nelson Thornes.

Tourret, 1997. Applying Maths in Construction. Butterworth-Heinemann.

http://en.wikipedia.org/

