The reports contained in this booklet are designed to be informative and helpful both to
candidates and those responsible for preparing them. The Examinations Committee will
be pleased to receive any comments or constructive criticism on the content of the
reports, either of a general nature or relating to a particular subject.
The Committee emphasises that the individual subject reports should be read in
conjunction with the appropriate question papers. These are available from the
Engineering Council Examinations department at City & Guilds in sets for Part
1/Certificate examinations and individually for Part 2/Graduate Diploma examinations.
Subject | Number of Cands | Grade A | Grade B | Grade C | Grade D | Grade E | Grade F | Average Mark | % Pass Rate
PART 2(A)/GRADUATE DIPLOMA
UK PART 2(B)
O'SEAS PART 2(B)
(The numerical entries of this results table are not reproduced here.)
General Comments:
In May 2003, eighty-four candidates attempted the examination in D201 – Applied
Thermodynamics. Of the eighty-four candidates, thirty-one were awarded a mark of forty
percent or higher. The average mark on the examination was 33.7%.
A most interesting feature of the examination scripts is that some candidates simplified
certain questions (e.g. Q1 and Q5), either by making convenient assumptions or by
ignoring inconvenient data.
Q1
For the throttling process, in addition to h3 = h4, some candidates also (incorrectly)
assumed s3 = s4. Candidates also had difficulty in converting pressures in MPa to
pressures in bar. Sub-cooling of the condensate was frequently ignored.
Q2
Candidates had problems in determining the heat transfer in the heat exchanger.
Q3
Many candidates worked in kg rather than in kmol, that is, they assumed the analysis to
be on a mass basis.
Q4
Some candidates had no understanding of the make up of an air-standard dual
combustion cycle. Also some candidates assumed that the compression ratio was p2 / p1,
rather than v1 / v2.
Q5
Some candidates assumed that the temperature of condensation was 60 °C rather than
50.85 °C, that is, T3 (= Tg2). Assuming 60 °C made the evaluation of other properties
easy, but incorrect. Also, some candidates did not understand the meaning of sub-
cooling.
Q6
The Gibbs, Maxwell and other thermodynamic relations had little appeal, which is
unfortunate as they are the essence of thermodynamics. Calculation of the polytropic
index n gave problems to many candidates.
Q8
Cooling tower questions involving relative humidities require accurate, precise
calculations. One is dealing with very small pressures, for example:

Ps1 = φ1 Psat1 = 0.4 × 0.01704 = 0.006816 bar
and Ps2 = φ2 Psat2 = 0.98 × 0.04242 = 0.04157 bar

therefore w1 = 0.622 × 0.006816 / (1 − 0.006816) = 4.2686 × 10⁻³
and w2 = 0.622 × 0.04157 / (1 − 0.04157) = 26.9780 × 10⁻³

giving w2 / w1 = 6.3201. This ultimately leads to 1.995% being the percentage loss of
water, giving 399 kg/s as the make-up water required. If only two significant figures were
used, say, for Psat1 and Psat2, the end result would be somewhat different from 399
kg/s.
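The humidity-ratio arithmetic above can be scripted directly. A minimal sketch, assuming a total atmospheric pressure of 1 bar; the air and water flow data needed for the 1.995% and 399 kg/s results are not reproduced in this report, so only w1, w2 and their ratio are checked:

```python
# Humidity ratio from relative humidity phi and saturation pressure p_sat.
# Total pressure of 1 bar is an assumption.
P = 1.0  # bar

def humidity_ratio(phi, p_sat, p_total=P):
    """w = 0.622 * p_s / (p_total - p_s), with p_s = phi * p_sat."""
    p_s = phi * p_sat
    return 0.622 * p_s / (p_total - p_s)

w1 = humidity_ratio(0.40, 0.01704)   # inlet air
w2 = humidity_ratio(0.98, 0.04242)   # outlet air

print(f"w1 = {w1:.4e}")        # ~4.2686e-03
print(f"w2 = {w2:.4e}")        # ~2.6978e-02
print(f"w2/w1 = {w2 / w1:.2f}")  # ~6.32
```

Carrying the full precision through, as the report urges, is what makes the final make-up water figure reliable.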
Q9
Of those candidates who attempted this question, some did very well with the selection of
technology relevant to the third world.
Q1
This question on heat exchangers was done by 11 candidates, many obtaining
reasonable marks. The descriptive part was satisfactory but the calculations less so.
Errors were frequently the result of incorrect reading of the thermodynamic tables, e.g.
taking gas rather than liquid properties. No interpolations were necessary. Candidates
should make sure that they are familiar with the use of tables of properties. The heat
exchanger could be taken as co- or counter-flow.
The answers below are for counter-flow.
Q2
Very few candidates attempted this question. The first part, on aspects of radiation, was
bookwork but was poorly done. Reading values from the tables again proved
problematic for many candidates. Students should make sure that they are competent in
the use of data from tables. The question was based on a simple energy balance
between radiation and convection.
[Answer: T=1127 K]
Q4
Ten candidates did this question, most of them obtaining marks of 19/25 or higher.
Clearly heat transfer with cylindrical symmetry is being well taught.
[Answers: (b) (i) Re=13080 (ii) Pr=11.8 (iii) Nu=101.9 (iv) q =29.0 W m-1 (v) 0.0179 K s-1]
Q5
This question on humidity was attempted by few students. Part (a) was bookwork. Part
(b) required the application of the perfect gas equation to each component.
Q6
Very few candidates attempted this question, only one obtaining a pass.
[Answers: (a) H = 0.184 (b) ρ = 1.19 kg m⁻³, Dg = 1.16 × 10⁻⁵ m² s⁻¹, Sh = 16.3, kg = 4.73 × 10⁻³
m s⁻¹ (c) (i) KOG = 4.88 × 10⁻⁴ m s⁻¹ (ii) 3.72 × 10⁻⁶ kmol m⁻² s⁻¹]
Q7
Very few candidates attempted this question.
Q8
Very few candidates attempted this question.
[Answers: (b) 1.47 x 10-3 kmol m-3 (c) 1.18 x 10-8 m s-1, roughly 10 days (d) 1.14 s]
General Comments:
The pass rate and average mark have risen considerably compared with the results in
2002. However, this has been accompanied by a sharp fall in the number of candidates
taking the examination (from 121 in 2002 to 79). The average mark continues to be
greatly affected by a large number of very poor attempts made by inadequately prepared
candidates. The quality of answers given by the best candidates was very good, both in
terms of demonstrating their understanding of fluid mechanics and by the attention given
to accurate derivations and numerical calculations. However, this was accompanied by
many poor or very poor and often incomplete answers from other candidates. A
common problem among these candidates was the frequent introduction of careless
errors into their calculations.
Q1
32.9 % of candidates attempted this question.
(a) Rather than derive the form of expression given for the speed of sound, many
candidates simply demonstrated that the expression was dimensionally consistent by
starting with the desired result and introducing the dimensions of the variables into the
expression.
(b) Many attempts to derive the expression given used the Buckingham Pi theorem but
were unable to describe accurately the dimensions of the input variables, particularly
R, K and g. In addition, the algebraic manipulation of index variables was frequently in
further error, either through poor transcription of earlier work or through algebraic errors.
(c) Few candidates attempted this part of the question even though it did not require
successful completion of the earlier parts. This is a common observation with numerical
parts of questions in the paper. Candidates should be encouraged to attempt any
numerical parts of questions as a means of gaining some marks.
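The dimensional-consistency check that many candidates fell back on in part (a) can itself be automated. A minimal sketch, representing each quantity's dimensions as (M, L, T, Θ) exponent tuples and confirming that √(γRT) has the dimensions of speed; treating R as the specific gas constant is an assumption, since the question's full variable list is not reproduced here:

```python
# Dimensions represented as exponent tuples (M, L, T, Theta).
SPEED = (0, 1, -1, 0)

def mul(a, b):
    """Multiply two quantities: add their dimension exponents."""
    return tuple(x + y for x, y in zip(a, b))

def power(a, n):
    """Raise a quantity to a power: scale its dimension exponents."""
    return tuple(x * n for x in a)

gamma = (0, 0, 0, 0)    # ratio of specific heats - dimensionless
R     = (0, 2, -2, -1)  # specific gas constant, J/(kg K) = m^2 s^-2 K^-1
T     = (0, 0, 0, 1)    # absolute temperature

a = power(mul(mul(gamma, R), T), 0.5)   # sqrt(gamma * R * T)
print(a == SPEED)  # True
```

This confirms consistency only; as the report notes, it is not a derivation.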
Q2
92.4 % of candidates attempted this question.
Parts (a) and (b) of this question were answered well by many candidates, with accurate
use being made of the continuity and Bernoulli equations. However, many other
attempts also contained careless arithmetic errors in the solutions, often producing
results which were clearly wrong or impractical. In such situations, candidates should be
encouraged to make a sensible judgement about the feasibility of the numerical results
they generate and, if necessary, go back and check their own work. This could avoid an
unnecessary loss of marks.
In part (c), there was little or no differentiation between the force of the pipework on the
water and the force of the water on the pipework. It was common for candidates to
introduce symbols for forces, Fx and Fy without clearly indicating their meaning.
Candidates should realise that the momentum equation is used for the change in
momentum of the fluid.
Q3
46.2 % of candidates attempted this question.
(a) Many candidates were uncertain about the definition of manometric head for a pump.
A large number believed it to be the theoretical head available with ideal flow in the
pump, instead of simply the head measured by a manometer connected between the
inlet and outlet of the pump (or the gain in piezometric head across the pump). Most
candidates had less trouble with the definitions for the efficiencies.
Q4
14.1 % of candidates attempted this question.
(a) Many candidates were able to develop the expression given for the stream function.
(b) Fewer candidates were able to generate the expressions given in this part of the
question. Development of the expression in (i) required use of the condition that the
velocity at the stagnation point is zero. This was not always recognised by candidates.
(c) This part of the question was attempted by very few candidates. In (i), most
candidates did not confirm (by substitution of a = 78.1 mm) the separation of the
source/sink pair but simply assumed the value given was correct and used it to find the
source/sink strength required. No candidates succeeded in obtaining the maximum
velocity asked for in part (ii).
Q5
(a) Most candidates demonstrated a very poor understanding of how to set up the
differential equation for the flow of oil under equilibrium conditions. Indeed, many of the
‘solutions’ were artificially fabricated to get the expression given, even though the
development used was inconsistent and incorrect. In many cases, the chosen direction
for shear stress acting on a fluid element was wrong.
(b) Here, the majority of candidates integrated the equation accurately, but many then
used the incorrect boundary condition u = 0 at y = h instead of the correct one, τ = 0 at
y = h.
(c) The solution for part (i) required a further integration to obtain the volumetric flow rate
per unit width of the plate. However, there were many instances of careless errors both
in the integration process and in the subsequent numerical calculations. Only a few
candidates completed the calculations successfully. In part (ii), a significant number of
candidates did not appreciate that the maximum velocity of the oil occurs at the free
surface of the oil, corresponding to τ = 0. Instead, candidates incorrectly assumed
without proof that the maximum velocity occurred at y = h/2. Though incorrect, this was
consistent with the earlier incorrect boundary condition of u = 0 at y = h made by some
candidates.
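The boundary condition discussed above determines where the maximum velocity occurs. A sketch under assumed oil properties (not the question's data), integrating the equilibrium profile for a gravity-driven film with u = 0 at the wall and zero shear (τ = 0) at the free surface y = h:

```python
import math

# Illustrative values - assumed, not the question's data.
rho, mu = 900.0, 0.10        # oil density kg/m^3, viscosity Pa.s
theta = math.radians(30.0)   # plate inclination
h = 0.005                    # film thickness, m
g = 9.81

G = rho * g * math.sin(theta) / mu   # driving term

def u(y):
    # From integrating the equilibrium equation with u = 0 at the wall
    # (y = 0) and tau = 0 (du/dy = 0) at the free surface (y = h).
    return G * (h * y - 0.5 * y * y)

# Maximum velocity is at the free surface, not at y = h/2:
assert u(h) > u(h / 2)

# Flow per unit width: integral of u from 0 to h gives G*h^3/3.
q_exact = G * h ** 3 / 3
n = 10000
dy = h / n
q_num = sum(u((i + 0.5) * dy) * dy for i in range(n))
print(f"q = {q_exact:.4e} m^2/s")
```

The midpoint-rule check confirms the closed-form result of the further integration required in part (c)(i).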
Q6
61.5 % of candidates attempted this question.
(a) Very few candidates appeared to understand the problem of measuring stagnation
pressure with a pitot tube in supersonic flow. No one appeared to recognise that the
pitot tube brings the velocity of the flow to zero in the vicinity of the tube, necessitating a
transition from supersonic to subsonic flow with a corresponding loss in stagnation
pressure through the resulting shock front. This pressure loss has to be accounted for
separately and added to the measured stagnation pressure at the pitot tube to arrive at
an accurate value for the stagnation pressure in the supersonic flow.
(b,c) Most candidates were able to accurately calculate the values required in parts (b)
and (c) using isentropic flow conditions. A few made calculation errors which resulted in
an incorrect initial supersonic Mach number. However, they proceeded with isentropic
calculations, unaware of the problems referred to in part (a). There were many
examples of careless calculation errors, such as using a calculated result for M2 as a
value for M and using temperatures in °C rather than absolute temperature K in
calculations. Also, it was clear that several candidates were unfamiliar with
compressible flow with instances of assuming constant density (sometimes using the
value for water), and using the incompressible Bernoulli equation to calculate velocity.
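The stagnation-pressure loss described in part (a) is captured by the standard Rayleigh pitot formula for a pitot tube behind the detached normal shock. A sketch comparing it with the purely isentropic calculation (which, as noted above, is what unaware candidates effectively used), evaluated at an illustrative Mach 2:

```python
# Rayleigh pitot formula: stagnation pressure sensed by a pitot tube in
# supersonic flow (behind the normal shock), over free-stream static p1.
def rayleigh_pitot_ratio(M, gamma=1.4):
    a = ((gamma + 1) / 2 * M * M) ** (gamma / (gamma - 1))
    b = (2 * gamma / (gamma + 1) * M * M
         - (gamma - 1) / (gamma + 1)) ** (1 / (gamma - 1))
    return a / b

# Isentropic (no-shock) ratio for comparison:
def isentropic_ratio(M, gamma=1.4):
    return (1 + (gamma - 1) / 2 * M * M) ** (gamma / (gamma - 1))

print(f"with shock:    {rayleigh_pitot_ratio(2.0):.3f}")  # ~5.640
print(f"shock ignored: {isentropic_ratio(2.0):.3f}")      # ~7.824
```

The gap between the two values is exactly the stagnation-pressure loss through the shock front that has to be accounted for separately.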
Q7
45.6 % of candidates attempted this question.
(a) Most candidates were able to correctly derive the expression for the equivalent pipe
loss coefficient of a pair of parallel pipes.
(b) The majority of candidates seemed familiar with the Hardy Cross network analysis
method required in part (i). However, several incorrectly treated the two loops of the
network as a single loop for the analysis. Also, a minority of candidates introduced
careless numerical errors, particularly when evaluating the correction flow term, causing
their solution to diverge from rather than converge to the correct solution. A significant
number of candidates did not attempt part (ii) of the question. Of those that offered a
solution, many did not take account of the elevation data provided while others used it
incorrectly. Very few candidates succeeded in obtaining the correct pressure head at
node A.
[Answer: (ii) 12.06 m]
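The Hardy Cross correction step itself is mechanical. A single-loop sketch with assumed loss coefficients (the exam's two-loop network data are not reproduced), showing the correction ΔQ = −Σh / (2Σ|h/Q|) driving the loop head losses into balance rather than divergence:

```python
# Hardy Cross flow correction for one loop: two parallel pipes carrying
# a fixed total flow. Loss coefficients K are assumed values.
K1, K2 = 500.0, 2000.0   # head loss h = K * Q * |Q| in each pipe
Q_total = 1.0            # m^3/s, assumed

Q1 = 0.5                 # initial guess; Q2 = Q_total - Q1
for _ in range(20):
    Q2 = Q_total - Q1
    # Loop head-loss sum (clockwise positive) and its derivative term
    h_sum = K1 * Q1 * abs(Q1) - K2 * Q2 * abs(Q2)
    dh = 2 * (K1 * abs(Q1) + K2 * abs(Q2))
    Q1 += -h_sum / dh    # the Hardy Cross correction dQ

Q2 = Q_total - Q1
print(round(Q1, 4), round(Q2, 4))   # 0.6667 0.3333
```

A sign error in the correction term is precisely what makes the iteration diverge, as some candidates found.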
Q8
41.8 % of candidates attempted this question.
(a) Although a number of candidates were able to correctly calculate a system demand
curve, many attempts were frustrated by careless errors, such as using A instead of A²
when converting from velocity to volumetric flow in the energy equation. A few
candidates clearly did not understand how to generate a system curve from the
information given.
(b) There were no candidates able to calculate a revised best efficiency point for the
pump operating at a new speed while simultaneously matching the system demand
curve. This calculation is best achieved by application of the flow coefficient and head
coefficient expressions between the best efficiency point for the given pump conditions
and the new operating point at the different speed.
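For a fixed impeller diameter, the flow- and head-coefficient relations described above reduce to the affinity laws Q ∝ N and H ∝ N². A sketch with illustrative numbers (not the exam data):

```python
# Pump affinity (similarity) laws between speeds N1 and N2: preserving
# the flow coefficient Q/(N D^3) and head coefficient g H/(N^2 D^2)
# for the same impeller gives Q2 = Q1*(N2/N1), H2 = H1*(N2/N1)^2.
def scale_bep(Q1, H1, N1, N2):
    r = N2 / N1
    return Q1 * r, H1 * r * r

# Illustrative best-efficiency point, rescaled from 1450 to 1200 rpm:
Q2, H2 = scale_bep(Q1=0.12, H1=30.0, N1=1450.0, N2=1200.0)
print(round(Q2, 4), round(H2, 2))   # 0.0993 20.55
```

Matching the system demand then means intersecting the rescaled characteristic with the system curve; simply rescaling the old operating point is not the same calculation.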
(c) Most candidates appear to have little understanding of the net positive suction head
and there were no correct calculations presented. This is an important practical concept
for pump installations and is one which candidates need to understand and be able to
determine accurately.
Q9
52.6 % of candidates attempted this question.
Most candidates who attempted this question were able to complete it satisfactorily.
However, some had difficulty setting-up the differential equation for the flow. Also, the
presence of a negative sign in the constitutive equation for the liquid caused difficulties
for many.
(a) This was answered well and there were many correct derivations. However, the
negative pressure gradient was not always handled well. Some candidates appeared
not to realise that it simply signifies a pressure drop in the flow direction and therefore
the negative gradient yields a positive numerical value.
(b) This part of the question was generally done well, with most candidates deriving the
correct expression for the volumetric flow rate, apart from a few attempts blighted by
careless errors of integration.
(c) In this numerical part of the question, many candidates made careless calculation
errors which lost them some marks. Typical errors included units, such as using 20 for
the pressure gradient instead of 20 × 10³ N/m³, transcribing 2/3 as 3/2 and forgetting to
evaluate the index term in an expression. Candidates should be encouraged to pay
more attention to the accuracy of their calculations by checking their own work.
General Comments:
The year 2003 was the second examination in this subject, although the syllabus is
similar to relevant parts of Paper 203 Fluid Mechanics in earlier years. Compared with
2002, there was an improvement in the overall performance of candidates in 2003.
Q1
In part (a) the derivation from first principles was attempted by very few candidates.
However, many candidates did successfully derive the result in part (b) by applying the
momentum principle, as was envisaged in the question by the expression 'or otherwise'.
Q2
This descriptive question was not well answered. Equipotentials were not always drawn
at right angles to streamlines, to produce a flow net comprising a grid of curvilinear
squares. For the real flow case, few candidates indicated the required separation of the
flow after the convex point at the inside of the bend, or explained how the resulting
constriction and subsequent expansion of the flow causes turbulence and energy losses
downstream of the bend.
Q3
Many candidates were familiar with the S curve technique required to produce the 1 hour
unit hydrograph, and were able to perform the convolution required to produce the
answer of 49 m3/s for the peak runoff. Some errors occurred through failing to apply
correct lag times in the calculation.
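The convolution step can be sketched as follows, with illustrative unit-hydrograph ordinates and rainfall blocks (not the exam data); the lag errors mentioned above correspond to mis-aligning the index i + j:

```python
# Discrete convolution of a 1-hour unit hydrograph with successive
# blocks of effective rainfall, lagging each block correctly.
uh = [0.0, 2.0, 5.0, 3.0, 1.0]   # runoff per unit of effective rain (illustrative)
rain = [10.0, 5.0]               # effective rainfall in successive hours

Q = [0.0] * (len(uh) + len(rain) - 1)
for i, p in enumerate(rain):          # lag each rainfall block by i hours
    for j, ordinate in enumerate(uh):
        Q[i + j] += p * ordinate

print(Q)        # [0.0, 20.0, 60.0, 55.0, 25.0, 5.0]
print(max(Q))   # peak runoff = 60.0
```

Omitting the lag i collapses the blocks onto the same hour and changes the peak.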
Q4
The derivation in part (a) was generally well done, but the iterative calculation required in
part (b)(i) was less well handled, as was the application in part (b)(ii). The numerical
solution for the settling velocity in part (b)(i) is approximately 0.023 m/s, and in part (b)(ii)
the proportion retained is approximately 76%.
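The iterative calculation in part (b)(i) is a fixed-point problem: the drag coefficient and settling velocity depend on each other through the Reynolds number. A sketch using the Schiller-Naumann drag correlation and assumed particle properties (the exam's data, which give roughly 0.023 m/s, are not reproduced here):

```python
import math

# Terminal settling velocity of a sphere by fixed-point iteration.
# Particle and fluid properties below are assumed, illustrative values.
d = 2.0e-4                     # particle diameter, m
rho_p, rho = 2650.0, 1000.0    # particle and water densities, kg/m^3
mu = 1.0e-3                    # water viscosity, Pa.s
g = 9.81

v = 0.01                       # initial guess, m/s
for _ in range(100):
    Re = rho * v * d / mu
    Cd = 24.0 / Re * (1 + 0.15 * Re ** 0.687)   # Schiller-Naumann, Re < ~1000
    v_new = math.sqrt(4 * g * d * (rho_p - rho) / (3 * rho * Cd))
    if abs(v_new - v) < 1e-9:
        break
    v = v_new

print(f"settling velocity ~ {v:.4f} m/s")
```

Each pass updates Re and Cd from the latest velocity; the loop converges in a handful of iterations for these values.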
Q5
Some candidates sketched boundary layer profiles along a plate, rather than the
required velocity profiles across a circular pipe. Some knew that the Prandtl 1/7th law
has limitations at the pipe wall and the pipe centreline, but did not include details of how
the velocity gradient values are incorrect, which would have improved their answers.
The result derived in part (b)(ii) may be used to show that the value of y/a required in
part (b)(iii) is approximately 0.24.
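The y/a value quoted for part (b)(iii) follows from the 1/7th power law: for u/Uc = (y/a)^(1/n) the mean-to-centreline velocity ratio in a pipe is 2n²/((n+1)(2n+1)), and setting the local velocity equal to the mean locates the radius where the two coincide (this reading of part (b)(iii) is inferred from the quoted answer). A short check:

```python
# Prandtl 1/7th power law u/Uc = (y/a)^(1/7): find where the local
# velocity equals the mean velocity.
n = 7
mean_ratio = 2 * n * n / ((n + 1) * (2 * n + 1))   # u_mean / U_centreline
print(round(mean_ratio, 4))   # 0.8167

y_over_a = mean_ratio ** n    # solve (y/a)^(1/n) = mean_ratio
print(round(y_over_a, 3))     # 0.242, i.e. approximately 0.24
```
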
Q6
Some derivations of the correction term in part (a) were not sufficiently clear about the
sign convention used for flows into and away from a node. For part (b), an appropriate
initial assumption for the static head at J is necessary to avoid excessive iteration. A
suggested value is the average of the four reservoir levels, which is approximately 41 m
above datum. From this starting point, after two corrections have been applied, the
resulting flows are approximately 1.54 m3/s, 1.23 m3/s, 0.48 m3/s and 2.23 m3/s in
pipelines AJ, BJ, JC and JD respectively. Other starting values were equally acceptable,
but may require more iterations to produce answers within the tolerance specified. It is
assumed that reservoir surface areas are sufficiently large so that water levels do not
vary significantly, and also that velocity heads are negligible in relation to the specified
tolerance. Some candidates also rightly pointed out in part (c) that the use of constant
Q7
Many candidates correctly plotted pipeline and pump characteristic curves for part (a) to
find a flow rate around 0.16 m3/s and approximately 80 kW power used by the pump. A
few errors resulted from incorrect use or omission of the pump efficiency value. The
dimensional analysis asked for in part (b)(i) was not particularly well done. In some
cases only a dimensional check was performed to show that the terms quoted are
indeed dimensionless, which gained some credit but not the full marks available. To
calculate correctly the values of approximately 0.10 m3/s and 38 kW in part (b)(ii) it was
necessary to use the dimensionless groups to adjust the pump characteristic curve, and
then find where the adjusted curve crosses the pipeline system curve previously
produced in part (a). It is not correct to apply the dimensionless groups to adjust the
solutions from part (a), although in fact the numerical values obtained by that method are
quite similar to the correct answers.
Q8
Part (a) was generally well done, with the terms correctly identified and defined. In part
(b), the value of Manning's n may be calculated as approximately 0.0112, and then used
in the subsequent two step calculation to give the distance upstream as approximately
475 m. Some very poor sketches of the situation in part (b) were noted, and it is
suggested that an ability to sketch the appropriate water surface profile must help in
performing the calculation correctly.
Q9
As an alternative to the specified graphical plotting method, a number of candidates
applied equivalent statistical methods to the data, and due credit was allowed for this.
Using a graph of the data, the amount of scatter and the extent of extrapolation will be
apparent, and these relate to the comments required. The numerical solutions are
approximately 260 m3/s, and a return period of 130 years. It was noticeable that although
candidates were able to calculate the value of the Gumbel reduced variate y for a given
value of T, some were unable to perform this calculation in reverse, and deduce T for a
given value of y.
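The forward and reverse Gumbel calculations mentioned above are both one-liners, and the reverse step is simply the algebraic inversion that some candidates could not perform:

```python
import math

# Gumbel reduced variate y for return period T, and the inverse T(y).
def gumbel_y(T):
    return -math.log(-math.log(1.0 - 1.0 / T))

def return_period(y):
    return 1.0 / (1.0 - math.exp(-math.exp(-y)))

y = gumbel_y(130.0)
print(round(y, 3))                    # ~4.864
print(round(return_period(y), 1))     # 130.0 - the round trip recovers T
```
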
General Comments:
Q1
This two-part question was not popular. Part (a) dealt with volumetric throughput-limiting
phenomena in packed and plate columns. These always represent a design constraint.
The phenomena are fully-described in standard text-books.
Q2
In this question on binary distillation the substitution of data into the Fenske and
Underwood equations to obtain the number of theoretical plates at total reflux (N+1)m
and the minimum reflux ratio Rm, followed by the use of the Gilliland plot to estimate the
number of theoretical stages N required for a specific degree of separation, was
generally well-understood. However application of this procedure during preliminary
design to relate N to practical R values for various distillate and bottom compositions,
and hence optimise the design, was not well-explained.
In the calculation the conversion of N into the number of actual sieve plates by use of an
estimated plate efficiency was not considered.
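Of the procedure described, the Fenske step is the most mechanical. A sketch with illustrative compositions and relative volatility (not the exam data), giving the minimum number of theoretical stages at total reflux:

```python
import math

# Fenske equation for a binary column at total reflux:
# N_min = ln[(xD/(1-xD)) * ((1-xB)/xB)] / ln(alpha)
def fenske_nmin(xD, xB, alpha):
    return math.log((xD / (1 - xD)) * ((1 - xB) / xB)) / math.log(alpha)

# Illustrative values - assumed, not the exam data:
N_total_reflux = fenske_nmin(xD=0.95, xB=0.05, alpha=2.5)
print(round(N_total_reflux, 2))   # ~6.43 theoretical stages
```

Dividing the stage count from the Gilliland step by an estimated plate efficiency is what converts theoretical stages into the actual sieve plates the report says candidates omitted.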
Q3
There were some good attempts at parts (a) and (b) of this question on physical gas
absorption with a dilute solution and a straight line equilibrium relationship between gas
and liquid concentrations.
Part (c) was rarely dealt with. The overall gas-phase height of a transfer unit, H.T.U.OG,
will, because of axial mixing, vary with column diameter. The overall gas mass transfer
coefficient Kg will depend upon wetted area (i.e. wetting performance), system
properties, gas turbulence, liquid distribution efficiency, etc.
Q4
This was a straightforward example of the representation of multicomponent vapour-
liquid phase equilibria based upon equilibrium (K=y/x) data.
With regard to part (b), total condensation of a multicomponent mixture is only complete
at its bubble point temperature. Hence an approximate bubble point, not a dew point,
estimation was required.
Part (c) required an understanding that operation at a higher pressure would result in a
higher condensing temperature range, i.e. dew point to bubble point temperatures.
Hence if the same coolant was used a higher temperature driving force would result, so
that a smaller condenser would be required; alternatively a higher temperature, possibly
more-economical, coolant would suffice. The increase in pressure would also result in an
increase in vapour density; since the vapour flow-rate determines the pressure drop,
smaller upstream equipment, piping, condenser etc. would be suitable. However more
robust, and hence expensive, construction would be needed.
Q5
There were few attempts at this standard calculation for a plate and frame filter press.
In part (b) there were inadequate summaries of the different ways in which it can be
operated, i.e. filtration at constant pressure, filtration at constant rate, with or without filter
aids, and with different washing or air-blowing techniques.
In (c) the advantages and disadvantages of a plate and frame press compared with
other batch-operated equipment, e.g. bed filters or leaf filters, were not well
summarised.
Q6
Part (a) required a discussion of the significance of each term and how they may be
maximised. Part (b) sought an explanation of means for their optimisation in practice,
e.g. to obtain a balance between K and (CC)m , or between K and A, by using various
plate designs, agitation and baffling, various packing designs and configurations etc.
Q7
There were some reasonable attempts at both (a) the derivation of the basic equation for
simple (differential) batch distillation of a binary mixture, and (b) evaluation of the integral
for the specific case given to find the quantity of distillate produced, D=F-W. The
composition of this distillate was then found by a material balance for the more volatile
component (carbon disulphide).
The answer to (b) ii, approximately 69 mole % carbon disulphide, indicated that only
moderate enrichment would be obtained by simple distillation. To obtain a distillate richer
in carbon disulphide would require the addition of a packed or plate rectification column
plus a reflux splitter. Operation could be either at constant reflux ratio to yield a variable
distillate composition or at variable reflux ratio to yield a constant distillate composition.
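The derivation and integral in parts (a) and (b) can be checked numerically. A sketch using an assumed constant relative volatility and assumed compositions (the exam's carbon disulphide data are not reproduced), evaluating the Rayleigh integral ln(F/W) = ∫ dx/(y* − x) and recovering the distillate composition by material balance:

```python
import math

# Simple (differential) batch distillation of a binary mixture.
# alpha, compositions and charge below are assumed, illustrative values.
alpha = 3.0
xF, xW = 0.40, 0.20   # feed and final still compositions (mole fraction)
F = 100.0             # kmol charged

def y_eq(x):
    """Equilibrium vapour composition for constant relative volatility."""
    return alpha * x / (1 + (alpha - 1) * x)

# Midpoint-rule quadrature of the Rayleigh integral from xW to xF.
n = 20000
h = (xF - xW) / n
integral = sum(h / (y_eq(xW + (i + 0.5) * h) - (xW + (i + 0.5) * h))
               for i in range(n))

W = F / math.exp(integral)    # material left in the still
D = F - W                     # distillate collected, D = F - W
xD = (F * xF - W * xW) / D    # distillate composition by material balance
print(round(W, 2), round(D, 2), round(xD, 3))
```

The modest enrichment of the computed distillate over the feed mirrors the report's observation that simple distillation achieves only moderate separation.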
Q8
Leaching is a complex separation process because e.g. the particle/flake size of the
solid, the distribution of solute within it, and the mechanisms and rate of internal mass
transfer are unique to each process.
Part (a) of this question required a block diagram of all stages involved in a process for
leaching a valuable oil from vegetable seeds. Candidates tended to ignore preparation of
the solid (e.g. cleaning, decorticating/shelling, conditioning, flaking or grinding) and post-
extraction solids treatment (e.g. recovery, and possibly washing and drying).
The main factors affecting the leaching process, i.e. (b), and practical considerations, i.e.
(d), are readily-deducible from general knowledge on leaching provided in basic text-
books.
Q9
Part (a) of this question required a standard graphical solution for a stagewise
countercurrent liquid-liquid extraction operation with a partially miscible system. The
answer to (b) was then obtained by a mass balance.
General Comments:
Only one candidate took this examination. It seems invidious to comment on the
performance of this candidate. This report therefore contains just the numerical answers
to the questions.
Q2
[Answers: (b) (i) Q = 0.206 (ii) Q = 0.286]
Q4
[Answers: (a) H f = -162 kJ mol-1 (b) T = 2315 K (c) CH = 42 kJ mol-1]
Q5
[Answer: (b) 1 = 0.491]
Q6
[Answer: (b) k2 = 0.0102 dm3 mol-1 min-1]
Q7
[Answer: (b) (iii) Vm = 34.4cm3 (iv) 150 m2 g-1]
Q8
[Answers: (a) 1012 s (b) (ii) 652 s]
Q9
[Answers: (a) V = 2.203 m3, = 881.1 s, [B] = 21.28 mol m-3, [C] = 10.64 mol m-3]
General Comments:
In total 3 candidates took the examination. With such a small sample it is not possible
to draw any strong conclusions about the responses to questions. However a
commentary on each question is given to aid future candidates of the paper.
Q1
This question attempted to test the candidate’s awareness of the significance in lighting
design of the “laws of perception” as defined by the “Gestalt” school. A good answer
would include diagrams and examples to support the student’s demonstration of their
understanding of the laws and their application in lighting design.
Q2
This question requires knowledge and understanding of the procedure to predict
interstitial condensation. Part (c) of this question required an understanding of the
design temperatures used for condensation prediction and how they contrast with those
used for heat loss calculation. The issue is that condensation for a period of one or two
days would not necessarily cause problems, and hence the external design temperature
is higher than that for heat loss calculations.
Q3
Candidates were required to define and describe thermal comfort criteria (e.g. activity,
clothing, and humidity) and discuss the European and North American indices. The
question also required a calculation of predicted mean vote (PMV).
Q4
This question was set to examine the candidate’s awareness of the factors that are
important in defining the building services requirements for a given building. The
building was a city centre office development. A range of solutions was expected. Good
answers would recognise the needs of the various parties that will contribute to the brief,
and how these needs would be translated into the design of the building and its systems.
Examples:
Low maintenance requirements – Selection of plant, instructions and details, systems
operation, location of plant for access and safety implications, ability to isolate systems,
cost of maintenance, and disruption of maintenance activities to the operation of the
building.
Users – Comfort, air conditioning, fresh air, opening windows, ability to work out of
normal hours, acoustics, lighting flexibility, and instructions to users on how the building
systems operate and can be adjusted.
Low energy – System types and main plant selection, building fabric, possible
compromises with requirements for cooling, other aspects of sustainability (materials,
transport of materials, water usage).
Good answers would include system and building design choices to illustrate the various
issues.
Q5
This question examines the candidate’s knowledge of ventilation for airborne
contamination. The question included a section requiring descriptive work and a
calculation of the air supply rate for reduction of the concentration levels of pollutants.
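The dilution calculation asked for rests on the well-mixed room balance V dC/dt = G − QC. A sketch with assumed room and source values (not the exam data):

```python
import math

# Well-mixed room: steady-state concentration and decay after the
# source stops. All numbers below are assumed, illustrative values.
V = 200.0    # room volume, m^3
Q = 0.5      # ventilation air flow, m^3/s
G = 0.002    # pollutant emission rate, m^3/s

C_ss = G / Q    # steady-state volume fraction from the balance G = Q*C
print(f"steady state: {C_ss * 1e6:.0f} ppm")   # 4000 ppm

def decay(C0, t):
    """Concentration t seconds after the source is removed."""
    return C0 * math.exp(-Q * t / V)

# After one nominal air change (t = V/Q) the level falls to 1/e:
print(f"after 1 air change: {decay(C_ss, V / Q) / C_ss:.3f} of initial")
```

Rearranging the steady-state balance for Q gives the required supply rate for a target concentration.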
Q7
This question required a response that demonstrated knowledge of the implications of
the physical shape of rooms and its effect on sound quality, in particular, for concert
halls, the issues relating to the production of music from the stage. Descriptions of
focus, long delayed reflections and flutter echoes would be expected in the answers.
The second part of the question required knowledge of the Sabine equation in
calculating the reverberation time of a room.
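The Sabine calculation in the second part takes the form RT = 0.161 V / A, where A is the total absorption. A sketch with illustrative room data (not the exam values):

```python
# Sabine reverberation time: RT = 0.161 * V / A, with A the total
# absorption (sum of surface area x absorption coefficient).
# Room data below are assumed, illustrative values.
V = 5000.0    # room volume, m^3
surfaces = [  # (area m^2, absorption coefficient)
    (600.0, 0.30),   # audience and seating
    (900.0, 0.05),   # walls
    (500.0, 0.10),   # ceiling
]

A = sum(area * alpha for area, alpha in surfaces)
RT = 0.161 * V / A
print(f"A = {A:.0f} m^2 sabins, RT = {RT:.2f} s")
```
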
Q8
The object of the question was to test the candidates' ability to calculate the summertime
peak internal dry resultant temperature, in a non-air-conditioned office, using the
thermal admittance procedure as detailed in CIBSE Guide A (1999). The question also
asked for methods of reducing the cooling load of the room without the use of air
conditioning.
Q1
This was a question many attempted and marks were often good. Most treated part (b)
as a crystallography question, showing little sign that they understood that this was a
highly simplified model for the packing of the aggregate in concrete. A large number of
examinees said (and claimed to prove) that there were 3 octahedral interstices per unit
cell rather than 4.
Q2
This was attempted well on the whole, but there was a tendency to rely on a diagram of
toughness vs. specimen thickness, with far too few (if any) explanations and
descriptions in words, when answering part (a). The numerical calculations were done
satisfactorily.
Q3
Part (a) was answered very badly by most candidates. Very brief descriptions of the
three processes were given with very little indication that the notion of a rate-controlling
process was understood or how the various diffusion processes filled that role.
Fortunately part (b) was answered quite well. It was also obvious that many candidates
did not know the difference between full annealing and process annealing.
Q4
Parts (a) and (c) were not well answered. Few showed any appreciation that many
composites have a greater toughness/crack resistance than the matrix and reinforcing
materials alone. As in Q3 the numerical part (b) of the question saved the day.
Q6
Few candidates appreciated the significance of the golf club head being required to be
hollow, and fewer knew what a glass ceramic was. Parts (c) and (d) were better done.
Q7
This was another question in which the numerical part was much better attempted than
the descriptive parts. The idea of residual stress is a difficult one for students and none
seemed to understand that the components of such a stress system must average to
zero over the volume of the system or to appreciate the problem of explanation this
poses for their effect in fatigue. Very few mentioned Goodman diagrams or the Gerber
analysis.
Q8
Parts (a) and (b) were well attempted but in parts (c) and (d) the position of the various
components in the galvanic series is only part of the story and the students did not
understand this well. However, the question was worded in such a way as to discourage
any attempt to consider things such as stress corrosion cracking in part (c) for example
and perhaps this is a comment on the question as much as the performance of the
candidates.
Q1
The answer requires an understanding of stress transformations and the Mohr’s Circle
graphical representation of 2-D stress. The question is in two parts.
Part (a) asks for sketches of Mohr’s Circle for three cases of given principal stresses.
The state of stress at a point in a 2-D stress system may be represented in the Mohr’s
circle plane relative to given reference axes, by plotting direct stress along the horizontal
axis and shear stress along the vertical axis. The maximum and minimum principal
stresses act in directions in which there is no shear stress and Mohr’s Circle passes
through these points represented on the horizontal axis. These principal stresses are
perpendicular in the physical plane. Angles in the Mohr’s Circle plane become twice
those in the physical plane. The values of direct and shear stress on a plane at 45° to
the principal stress directions are given from the Mohr's Circle by the value of direct
stress and shear stress on a radius at 90° to the direct stress axis.
Answers: (i) σ = 0, τ = 100 N/mm² (ii) σ = 50 N/mm², τ = 50 N/mm²
(iii) (Mohr's Circle reduces to a point) σ = 100 N/mm², τ = 0
There were reminder clues in the question that principal stresses act in perpendicular
directions and the maximum shear stress planes are at 45° to these directions.
Only a few candidates achieved completely correct answers to this question. Generally
candidates failed to appreciate how Mohr’s Circle describes a 2-D state of stress with
respect to a reference direction, in particular that angles in the Mohr’s Circle plane are
twice those in the physical plane. They were also unsure that the stresses had to be
expressed relative to the same axis system before they could be summed.
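These transformation results can be checked with a short calculation; the principal stress pairs below are not stated explicitly in the report but are inferred from the quoted answers (a sketch, not part of the original solution):

```python
import math

def stresses_on_plane(s1, s2, theta_deg):
    """Direct and shear stress on a plane whose normal is theta degrees
    from the major principal direction (2-D Mohr's circle relations)."""
    two_t = math.radians(2 * theta_deg)
    centre = (s1 + s2) / 2.0
    radius = (s1 - s2) / 2.0
    return centre + radius * math.cos(two_t), radius * math.sin(two_t)

# Principal stress pairs inferred from the quoted answers, plane at 45 deg:
for s1, s2 in [(100, -100), (100, 0), (100, 100)]:
    sigma, tau = stresses_on_plane(s1, s2, 45)
    print(round(sigma, 6), round(tau, 6))
# prints 0.0 100.0 / 50.0 50.0 / 100.0 0.0, matching (i)-(iii)
```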
Q2
(a) It is necessary first to determine the stress cycle, using the bending stress equation.
This is ±170 N/mm². This stress is located on the given S-N curve, and occurs at logN
= 4.7. This latter value is best determined using similar triangles, but can also be found
by accurate drawing and scaling. It follows that the number of cycles to failure is approx.
50,000.
(b) The stress cycle is now 170 N/mm² to zero. These values are plotted on a Goodman
diagram, at the appropriate mean stress of 85 N/mm². Again by similar triangles or
accurate drawing the stress range at zero mean can be estimated as 108 N/mm².
Since this is below the endurance limit of the S-N curve, the component will have an
infinite life.
It would appear that candidates were not prepared for a numerical fatigue question.
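As an illustration, the similar-triangles interpolation of part (a) can be reproduced numerically. The S-N line endpoints below are assumed (the actual curve was supplied on the question paper and is not reproduced here); they are chosen only so that a stress of 170 N/mm² lands near the quoted logN = 4.7:

```python
# Hypothetical log-linear S-N line: (logN, S) endpoints (3, 300) and (6, 70),
# chosen only to reproduce the quoted result for a 170 N/mm^2 stress cycle.
logN1, S1 = 3.0, 300.0
logN2, S2 = 6.0, 70.0

def cycles_to_failure(stress):
    # Similar triangles (linear interpolation) on the S-logN line.
    logN = logN1 + (S1 - stress) * (logN2 - logN1) / (S1 - S2)
    return logN, 10 ** logN

logN, N = cycles_to_failure(170.0)
print(round(logN, 2), round(N))   # about 4.7 and about 50,000 cycles
```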
Q3
From the given data, the torque being transmitted can be found (382 Nm).
The required diameter of a solid shaft can be found from the shear stress-torque
equation and is 30 mm.
Similarly the bore of a hollow shaft of 50mm O.D. can be found to be 47.36 mm.
The weight of the two possible shafts is proportional to their cross sectional area, and so
the hollow shaft is much lighter.
Returning to the standard torque equation allows the angle of twist of the hollow shaft to
be determined as 0.0128 rad. or 0.73 degrees.
A number of candidates incorrectly assumed that the solid shaft was of 50 mm diameter,
and so conflicted with the specified working stress.
Some candidates tried to use thin walled tube theory, which was inappropriate for the
solid shaft, and unnecessary for the hollow one.
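The shaft sizing can be sketched as follows. The allowable shear stress is not restated in the report, so a value of 72 N/mm² is assumed here, chosen to reproduce the quoted 30 mm solid diameter; the resulting bore is then close to, though not exactly, the quoted 47.36 mm:

```python
import math

T = 382.0      # transmitted torque, N*m (from the question data)
tau = 72.0e6   # ASSUMED allowable shear stress, Pa, chosen to give d = 30 mm

# Solid shaft: tau = 16*T / (pi * d^3)
d_solid = (16 * T / (math.pi * tau)) ** (1 / 3)

# Hollow shaft, 50 mm O.D.: tau = 16*T*D / (pi * (D^4 - d^4))
D = 0.050
d_bore = (D ** 4 - 16 * T * D / (math.pi * tau)) ** 0.25

# Weight ratio is the ratio of cross-sectional areas (hollow / solid).
area_ratio = (D ** 2 - d_bore ** 2) / d_solid ** 2
print(round(d_solid * 1000, 1), round(d_bore * 1000, 1), round(area_ratio, 2))
```

With the report's exact working stress the bore comes out at 47.36 mm; the hollow shaft weighs roughly a third of the solid one.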
Q4
Answers: With the 1 kN load applied at the end D, the bending moment at A and D is
zero, at B = 67 Nm and at C = 100 Nm. The slope of the bar is 1.9° from the horizontal.
There were very few completely correct answers to this question. The main cause of
error was trying to set up a differential equation for the deflection of a rigid
beam, with boundary conditions for the compatibility of displacement based on the
rigidity of the beam! Other errors arose from uncertainty over the meaning of 'spring
stiffness' (the units of this quantity are helpful here) and from not applying the correct
characteristics for a hinge.
Q5
The solution to this question requires the setting-up of the bending moment equation, in
algebraic terms, for a cantilever beam with two different load conditions of the same
overall magnitude. Macaulay’s method is convenient to represent the point or distributed
load/ moment applications along the length. The bending moment equation is then
integrated twice to determine the slope and deflection. The force and moment reactions
at the built-in end have to be accommodated and the displacement characteristics of the
support enable the constants of integration to be determined.
Answer: The part span distributed load gives the larger end deflection of 82PL³/48EI
compared with 81PL³/48EI due to the point load.
The part span distributed load caused problems for some candidates who tried to
substitute a single concentrated load and gave, in full, the same derivation for the end
deflection twice! Some candidates had difficulty with the requirement to determine the
displacement of the ‘overhang’ section rather than at the point of the load application.
The use of Macaulay’s method enables the initial bending moment equation to be set-up
for the complete beam such that after integration the slope/deflection may be determined
at any location from the corresponding equation.
It is worth recommending here that candidates consider the dimensions of the algebraic
quantities they are dealing with in an equation as an aid to confirming the accuracy of
their work. For example, a moment has the dimensions of FORCE × LENGTH, so each
term in a bending moment equation should have these dimensions. Answers were seen
in which a force was added to a moment, or a moment might be added to a term with
dimensions FORCE × LENGTH³; equations in such circumstances cannot be valid and
such a check could reveal the source of an earlier error.
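The two quoted coefficients can be verified by superposition of point-load tip deflections, assuming (consistently with the quoted answers, since the question is not reproduced here) a cantilever of length 2L carrying either a point load P at 1.5L from the fixed end or a distributed load of total P over the outer half:

```python
from fractions import Fraction as F

def tip_coeff(a, l):
    """Tip deflection of a cantilever (length l, fixed at the origin) due to
    a point load W at distance a from the root is W*a^2*(3l - a)/(6EI);
    this returns the coefficient of W/EI, in exact arithmetic."""
    return a * a * (3 * l - a) / 6

l = F(2)                        # cantilever length 2L, working in units of L

# Case 1: point load P at 1.5L from the fixed end.
point = tip_coeff(F(3, 2), l)   # coefficient of P*L^3/EI

# Case 2: UDL of total P over [L, 2L] (intensity w = P/L), by integrating
# the point-load result; the antiderivative of a^2*(3l - a)/6 in a is
# (l*a^3 - a^4/4)/6.
def G(a):
    return (l * a ** 3 - a ** 4 / F(4)) / 6

udl = G(F(2)) - G(F(1))
print(point, udl)               # 27/16 and 41/24, i.e. 81/48 and 82/48
```

The exact fractions confirm the 81PL³/48EI (point load) and 82PL³/48EI (part span UDL) answers, and incidentally confirm the dimensional check recommended above: every term carries FORCE × LENGTH³ before division by EI.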
Q6
The ‘exact’ stress values should be calculated using Lamé's equations. The maximum
value is the hoop stress on the inner surface and is 91.1 N/mm².
Some candidates had forgotten the equation for hoop stress in a thin cylinder (PD/2t)
and others compared the Lamé values for the inner and outer surfaces, instead of
comparing Lamé with thin cylinder theory.
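The comparison the question intended can be sketched with assumed dimensions (the question's pressure and geometry are not restated in the report, so these figures are illustrative and do not reproduce the quoted 91.1 N/mm²):

```python
# Hypothetical geometry: internal pressure 10 N/mm^2, 90 mm bore, 10 mm wall.
p, a, b = 10.0, 45.0, 55.0     # pressure, inner and outer radii (mm)

# Lame: hoop stress is A + B/r^2 with A = p*a^2/(b^2 - a^2),
# B = p*a^2*b^2/(b^2 - a^2); the maximum is on the inner surface r = a.
lame_inner = p * (b ** 2 + a ** 2) / (b ** 2 - a ** 2)

# Thin-cylinder approximation, sigma = pD/2t with D the bore diameter.
thin = p * (2 * a) / (2 * (b - a))

print(round(lame_inner, 1), round(thin, 1))   # 50.5 versus 45.0 N/mm^2
```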
Q7
The first step is to establish the properties (centroid and second moments of area) of the
given asymmetric section. The neutral axis for a section is known to pass through its
centroid, the position of which is determined by taking first moments of area about a
convenient axis system. Choosing the reference axes along the top edge (x) and down
the left edge (y), the centroid is found to be 12.14 mm to the right and 37.14 mm
downwards. For the calculation of second moments of area about x,y axes with origin at
the centroid it is convenient to consider the section as a combination of rectangles, for
which the second moment of area is remembered, and use the parallel axis theorem to
transfer the value from local centroidal axes to the section centroidal axes.
Answers: Ixx = 14.15 × 10⁵ mm⁴, Iyy = 2.40 × 10⁵ mm⁴ and Ixy = 3.21 × 10⁵ mm⁴.
For the case of principal axes, it is required to find the orientation of the axes giving Ixy =
0. The axis rotation from the reference may be found from Mohr's Circle or the
transformation equation: tan 2θ = 2Ixy/(Ixx − Iyy), giving θ = 14.34°.
The given moment loading (M) is thus shown to be about a principal axis and so for the
second part of the question the stress may be found using simple bending theory, σ =
My/I with the second moment of area (I) and distance from the neutral axis (y) being
expressed with respect to the principal axes. The required second moment of area is
determined from the Mohr's Circle or the transformation equation:
I = (Ixx + Iyy)/2 + [(Ixx − Iyy)² + 4Ixy²]^½/2 = 14.97 × 10⁵ mm⁴.
For point A, y = 63.91 mm. Hence the required stress, σ = 85.5 N/mm².
The determination of the section properties was generally well tackled, although there
were errors made in the dimensions of the rectangles into which the section was divided
and particularly in the distances from the local centroid. With the section constants
determined, the given load was shown to act parallel to the principal axis. A number of
candidates struggled to find the stress at point A by not understanding that the load
produces a bending moment also with respect to the principal axes. Having found the
principal axis directions it is easier to transform the second moment of area and distance
to the principal axes rather than resolve the bending moment about the originally chosen
reference axis directions.
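Using the section constants quoted above, the principal-axis rotation and principal second moments can be checked directly (the small difference from the quoted 14.34° is rounding of the tabulated constants):

```python
import math

# Section constants from the solution, in units of 1e5 mm^4:
Ixx, Iyy, Ixy = 14.15, 2.40, 3.21

# Rotation to principal axes: tan(2*theta) = 2*Ixy / (Ixx - Iyy)
theta = 0.5 * math.degrees(math.atan2(2 * Ixy, Ixx - Iyy))

# Principal second moments from the transformation (or Mohr's Circle):
half_sum = (Ixx + Iyy) / 2
R = math.sqrt((Ixx - Iyy) ** 2 + 4 * Ixy ** 2) / 2
I_max, I_min = half_sum + R, half_sum - R

print(round(theta, 2), round(I_max, 2), round(I_min, 2))
# 14.33 degrees; I_max = 14.97e5 mm^4 as quoted
```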
Q8
(a) A plastic hinge will form at the point of maximum bending moment, i.e. at mid-span
where the moment is 250P Nmm (P being the point load in N). This must be equal to the
moment of the stresses across the section, which are uniformly -400 N/mm2 on the top
half of the beam and +400 N/mm2 on the bottom half. These stresses produce a moment
of 5 MNmm and hence P= 20 kN.
Many candidates clearly did not understand the term ‘plastic hinge’ and calculated the
load to produce initial yield. Determination of the plastic zone was not attempted,
although it is only a point at initial yield.
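The collapse load calculation can be sketched as follows; the 20 × 50 mm rectangular section is an assumption (the question's section is not restated in the report), chosen only because it reproduces the quoted plastic moment of 5 MNmm:

```python
sigma_y = 400.0    # yield stress, N/mm^2 (from the question)
b, d = 20.0, 50.0  # ASSUMED rectangular section, mm, giving Zp = 12500 mm^3

Zp = b * d ** 2 / 4          # plastic section modulus of a rectangle
Mp = sigma_y * Zp            # plastic moment, N*mm
P = Mp / 250.0               # collapse when 250P = Mp
print(Mp / 1e6, P / 1e3)     # 5.0 MNmm and 20.0 kN, as quoted
```

Note the distinction the report draws: at collapse the whole section is at ±σy (fully plastic), whereas first yield uses the elastic modulus bd²/6 and gives a smaller load.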
Q9
The pre-processor creates a mesh and determines the coordinates of nodes. Input
required includes the model geometry, material and loading, and an appropriate element
type must be chosen.
The processor calculates nodal displacements and hence strains and stresses. There is
no input from or output to the user.
The post processor makes the output user-friendly and can provide principal stresses
and their directions at specified points numerically, graphically, as contours or coloured
regions. The user chooses the required output format.
Candidates appear not to have had hands-on experience of FE programs, and merely
stated the obvious, i.e. that the pre-processor came before the processor and the post-
processor after it.
Q10
Part (a) and (b) are standard bookwork.
Part (c) The spring force changes with displacement, so an expression for force, and
then stress in the cylinder walls, needs to be formulated in terms of displacement. This in
turn leads to an expression for creep rate. This expression can be manipulated to give
time as an integral of a function of displacement. The required time can be evaluated as
6.56 minutes.
An easier, but less exact solution can be obtained by assuming a constant mean creep
rate.
Of the few who attempted the numerical part of this question, most fell at the first hurdle,
and mis-calculated the stress in the cylinder. Despite the question stating that the
cylinder ends were rigid, some used the stress here rather than in the cylinder walls as
the important parameter.
General Comments:
The pass rate was encouraging, 61%, which is approximately the same as last year.
However no candidate obtained particularly high marks, the highest being 65%. Some
candidates had obviously not prepared for the examination and achieved very low
marks; working through past papers is essential if a serious attempt is to be made. Also
some candidates had not read the rubric sufficiently carefully since they attempted more
than the maximum allowable number of questions in a particular section; in these cases
the question with the lowest mark in the section containing the surplus question was
ignored.
Clearly matrix methods and the finite element method are not being studied at the
various centres since hardly any candidate attempted the questions involving these
topics. However there was an increase in the number of candidates attempting questions
on stability, an encouraging trend.
Overall, candidates should prepare more thoroughly for the examination by familiarising
themselves with the fundamentals of the topics covered by the syllabus and, as stated
previously, work through past papers. There is, in fact, a wide choice, five questions from
twelve, so that candidates have every chance of success if they are adequately
prepared.
Section A
Q1
(a) Very few of the candidates who attempted this question appeared to be familiar with
the unit load method so that this part of the question was poorly answered with only one
completely correct solution. The method is to release the arch horizontally at one of the
supports and then use the condition of compatibility of displacement where the
horizontal displacement of the released support is equal to its displacement under the
horizontal support reaction; the unit load method gives both of these displacements.
(b) The expression for H derived in part (a) is used directly and M0, y and ds are
expressed in terms of an angular coordinate system; evaluating the integral between its
limits gives H = 21.2 kN. From symmetry the vertical reactions are each 50.0 kN.
Q2
(a) Candidates appear to be now more familiar with the slope-deflection method since
more attempted this question than in the past.
Initially the fixed end moments (FEMs) are calculated using the data attached to the
question paper. Similarly the slope-deflection equations may be deduced from the data
sheet for the stability function F. Then, from the equilibrium of moments at the joints,
MBA+MBC=0, MCB+MCD+FEMCE=0, MDC=0 and from the horizontal equilibrium of the
frame, SAB+SDC+10=0. The eight slope-deflection equations then reduce to four
simultaneous equations the solution of which gives the angles of rotation at B, C and D
and the horizontal displacement at B (and C). Substitution of these values in the initial
slope-deflection equations gives the required member end moments.
Q3
A very small number of candidates attempted this question so that it would appear that
matrix methods, as in previous years, are not widely studied. Of the candidates who did
make an attempt not one managed a correct solution and only one managed to produce
a nearly correct solution.
(a) The uniformly distributed load is equivalent to concentrated loads of wL/4 at nodes 2
and 6 together with a concentrated load of wL/2 at node 4. The stiffness matrix for each
of the members may be written down using the data sheet for the stability function F;
each of these matrices is a 6x6 symmetrical matrix.
Q4
A very badly answered question with not one correct or nearly correct solution.
(a) The frame has a statical indeterminacy of 2, say the horizontal and vertical reactions
at the pinned support D.
(b) The frame may be released at D and the horizontal and vertical components of the
displacement at D due to the external loading system calculated using, say, the unit load
method. Thus, in general
Δ = ∫(M0M1/EI) dz
where M0 is the bending moment in a member due to the applied loads and M1 is the
bending moment in a member due to a unit load applied at the point and in the direction
of the required displacement. Having obtained the horizontal and vertical components of
displacement two equations of compatibility are set up in terms of the unknown support
reactions and the flexibility coefficients of the frame. Thus
CV+a11R1+a12R2=0
CH+a21R1+a22R2=0
where R1 and R2 are the vertical and horizontal components of the reaction at D and
a11 = ∫(M1²/EI) dz etc.
Solving gives
R1=11.8kN, R2=-3.3kN.
Q6
Clearly candidates have no knowledge of the concept of the counterbracing of trusses
subjected to dead and live (moving) loads, since the very few who attempted this
question all failed to use the correct approach.
The vertical component of the axial force in a diagonal member in a panel of a truss
resists the shear force in that particular panel and the diagonal will be in tension or
compression depending on the sign of the shear force. Since it is undesirable for the
long diagonal to be in compression the panels, where this is a possibility, require
counterbracing.
Initially a family of influence lines for the shear force in each of the nine panels of the
truss is drawn. This is most easily done by setting up vertical ordinates equal to +1.0 at
the left-hand end and -1.0 at the right-hand end. The head of the live load is then
positioned at the point of zero shear force in panel 4 (panel 5 is already counterbraced)
and the sign of the shear force calculated. In this case the value, -0.38 kN, is negative,
and therefore panel 4 (and panel 6 from symmetry) require counterbracing. Next, the
head of the load is positioned at the point of zero shear force in panel 3; the value of the
shear force, +1.7 kN, is positive, and therefore panel 3 (and panel 7) does not need
counterbracing. From this result it is deduced that none of the remaining panels require
counterbracing.
Section B
Q7
A large proportion of candidates attempted this question and the majority of these
produced correct, or nearly correct, solutions.
The work absorbed by the yield lines is M(10.3 + 7.5/x) kNm and the work done by the
external load is (195 - 10x) kNm. Equating these expressions and differentiating the
resulting equation for M with respect to x and equating to zero gives x=3.11m and
M=12.9kNm/m.
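The stationary value of M can be checked numerically from the work balance quoted above (a simple grid search in place of the calculus):

```python
# Work balance: M*(10.3 + 7.5/x) = 195 - 10*x, so
# M(x) = (195 - 10*x) / (10.3 + 7.5/x); the critical mechanism maximises M.
def M(x):
    return (195 - 10 * x) / (10.3 + 7.5 / x)

# Fine grid scan over a plausible range of x (metres):
xs = [1 + i * 0.0001 for i in range(100001)]   # x from 1 m to 11 m
x_crit = max(xs, key=M)
print(round(x_crit, 2), round(M(x_crit), 1))   # 3.11 m and 12.9 kNm/m
```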
Q8
The three possible independent collapse mechanisms are: gable with plastic hinges
forming at B and C; left-hand rafter with plastic hinges forming under the load and at B;
right-hand rafter with plastic hinges forming under the load and at B and C. Combining
the gable and left-hand rafter mechanisms (with a hinge cancellation at B) gives
P=106.3kN while combining the gable and right-hand rafter mechanisms (with a hinge
cancellation at B) gives P=137.5kN. The minimum critical value of P is therefore 106.3kN
and the collapse mechanism has hinges at the mid-point of AB and at C. The
corresponding support reactions are, vertically: 89.6kN at A, 123.0kN at D and
Section C
Q9
Very few candidates attempted this question, some of whom treated the triangular
element as a truss; clearly the finite element method is not being studied by candidates.
The solution requires the displacement field, and hence the strains, to be expressed in
terms of the nodal displacements. The [B] matrix follows, from which [B]T[D][B] is
obtained. Finally the stiffness matrix for the element is [K]=[B]T[D][B]x4x1.
Q10
(a) Some of the candidates who attempted this question did not know the governing
differential equation of plate behaviour and therefore could not demonstrate that the
equation was satisfied by the deflection function.
(b) The determination of the bending moment distributions along the edges of the plate
is a straightforward matter of partially differentiating the deflection function twice with
respect to x and y in turn and substituting in the standard expressions for Mx and My.
Thus, for the edges x=0 and x=a, Mx = -q(y² - by)/4, My = -qν(y² - by)/4 respectively and for
the edges y=0 and y=b, Mx = -qν(x² - ax)/4, My = -q(x² - ax)/4 respectively.
Section D
Q11
More candidates attempted this type of question this year than in the past and there
were some creditable attempts at a solution.
The bending moment at any section of the beam, a distance z from one end is given by
M = Pv + wLz/2 - wz²/2
The boundary conditions are v=0 at z=0 and z=L which give A and B and hence the
general expression for v. The maximum displacement occurs at z=L/2 and the maximum
bending moment follows.
Q12
Very few candidates attempted this question and not one tried to obtain a solution using
the specified energy approach.
General Comments:
The pass rate for the 2003 paper was 42.5%, very similar to the pass rate in
previous years. In general terms, many candidates did not show evidence of having
prepared for this examination or, indeed, of knowing the contents of the syllabus.
Q2
This question, on a steel truss, brought up nothing of note amongst the few answers
received.
Q3
This question, on a steel plate girder, was attempted by many candidates who tried to
treat the welded section as a rolled section. Clearly, their knowledge of welded steel
girders was minimal, which was reflected in the marks.
Q4
This question was in 2 parts. The first part was a timber joist floor. This was well
answered, and it is obvious that timber design is part of their preparation regime for
many candidates. The second part, involving timber column sections, had a twist to the
question in that no size for the section was given. Many candidates tried algebra to find
section sizes. This fell down at the K12 factor table, which is numerical, with no formula
in the code itself. For future reference, should this arise again, it is advisable to guess a
size for the timber column and check the capacity against the axial loads and moments.
Q5
This question was the masonry check on a proposed office extension. This was well
answered, and no useful comment can be gained from this question.
Q6
This question, involving laterally loaded masonry wall panels, was very poorly answered.
This is a mystery, as the actual calculations and mathematics are very straightforward.
Again, lack of preparation and/or experience is the likely explanation.
Q8
This question involved a reinforced concrete footing subjected to bending, axial and
shear forces. The candidates were asked for a complete design and the solution was
acknowledged, in the marking, as being rigorous and longer than other questions.
However, the response from the candidates was encouraging.
Q9
This was a standard soil mechanics and reinforced concrete design of a cantilever
retaining wall. The candidates who answered this question in some detail obtained good
marks.
Finally, the message seems to be getting through that simply writing down what one
would have done, given more time, accrues no marks. Only a few candidates resorted
to this technique.
General Comments:
No candidate attempted Q10. The attempts at Q3, Q6 & Q12 were generally poor. These
observations give the examiners some concern. They suggest that, as in previous years,
those preparing candidates are not covering the full extent of the syllabus. While the
syllabus for this paper is very broad, that reflects the broad ranging knowledge required
by professional engineers working in the marine industries. It is important that the
candidates for this paper should be able to demonstrate a familiarity with the basic
concepts of both the Naval Architecture and the Marine Engineering aspects of this
subject.
Q2
This is a straightforward question to determine the principal dimensions of a new ship
given details of a basis ship and a set of Owner's Requirements. From the deadweight
the displacement can be estimated. Knowing displacement and speed, the length can be
determined and with length and speed the CB can be estimated. With displacement,
length and CB known, breadth and draught are unknown but related by breadth/draught
ratio and so can be determined. Finally depth can be derived by adding the required
freeboard, based on the fundamentals of the Load line Rules, to the calculated draught.
Constants for any of the standard empirical relationships between these parameters can
be derived as required from the data given for the basis ship. Satisfactory dimensions
would be L = 128.9 m, CB = 0.704, B = 16.7 m, T = 6.95 m Freeboard 1.65 m and D =
8.6 m (minimum).
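The internal consistency of these dimensions can be checked quickly (the basis-ship data and deadweight are not reproduced in the report, so only the quoted final values are used, with salt water at 1.025 t/m³):

```python
# Quoted dimensions for the new ship:
L, CB, B, T, freeboard = 128.9, 0.704, 16.7, 6.95, 1.65

volume = CB * L * B * T          # moulded displacement volume, m^3
displacement = 1.025 * volume    # displacement in tonnes, salt water
D = T + freeboard                # depth = draught + required freeboard
print(round(displacement), round(D, 2))   # about 10,800 t and D = 8.6 m
```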
Q3
(b) The impact on an AC drive motor with synchro-converter is likely to be less because
it is purely a rotating machine but in addition the motor does not have the same
tendency to change rotational speed in response to varying load because the speed is
determined by the frequency supplied to the motor. Electrically there may be effects
when the motor tries to maintain constant speed under increasing load.
Q4
It should be borne in mind that the aim of this question was to address the effect of
increasing engine operating temperature on the products of combustion and not the
characteristics of lubricating oils. The answer should therefore consider the effect of
increased cycle temperature on fuel consumption, engine efficiency and the chemistry of
Q5
(b) The wave energy spectrum should be transformed into an encounter spectrum using
the standard relations for ωe and S(ωe). The heave response spectrum SH(ωe) is then
obtained by multiplying ordinates of S(ωe) by the RAO squared. If the ordinates of the
heave response spectrum are then integrated, the resulting area m0 allows the
calculation of the significant heave amplitude as 2 × √m0.
(c) For the given data the significant heave amplitude is 0.41 m.
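The m0 integration described in part (b) can be sketched as follows; the encounter spectrum and RAO below are entirely illustrative (made up, not the question's data), so the result is not the quoted 0.41 m:

```python
import math

# Illustrative encounter-frequency grid, spectrum and heave RAO:
we = [0.2 + 0.05 * i for i in range(60)]                    # rad/s
S = [1.2 * math.exp(-((w - 0.7) / 0.25) ** 2) for w in we]  # S(we), m^2 s
rao = [1.0 / (1.0 + (w / 0.9) ** 4) for w in we]            # heave RAO

# Response spectrum = spectrum ordinate * RAO^2, then integrate for m0.
SH = [s * r ** 2 for s, r in zip(S, rao)]
dw = 0.05
m0 = sum(0.5 * (SH[i] + SH[i + 1]) * dw for i in range(len(SH) - 1))
sig_amp = 2 * math.sqrt(m0)      # significant heave amplitude
print(round(sig_amp, 2))
```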
Q6
(a) The major part of the explanation is given in the standard textbooks.
(b) A description of the following process is expected in answer to this question. The
characteristics of the short term bending moment distribution are dependent on the sea
conditions the ship has experienced. Given data from different sea areas then a long
term distribution of bending moment can be derived depending on the length of time the
ship is expected to spend in each as part of its trading pattern. From these values the
distribution of working stresses in individual parts of the ship structure can be determined
allowing for stress concentrations due to geometric effects and the number of cycles at
particular stress levels calculated. The number of cycles to failure at each of these stress
levels can also be determined by reference to type of structural detail or weld
configuration. Finally a Miner's Rule summation may be done to predict life to failure
such that Σ(ni/Ni) = 1.
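The Miner's Rule summation can be sketched with illustrative cycle counts and lives (made-up values, not the question's data):

```python
# Miner's Rule: damage = sum(n_i / N_i); failure predicted at damage = 1.
n = [2.0e5, 5.0e4, 1.0e4]   # cycles experienced at each stress level
N = [1.0e6, 2.0e5, 5.0e4]   # cycles to failure at each stress level

damage_per_pattern = sum(ni / Ni for ni, Ni in zip(n, N))
life_in_patterns = 1.0 / damage_per_pattern   # repetitions of the pattern
print(round(damage_per_pattern, 2), round(life_in_patterns, 2))
# 0.65 damage per pattern, so about 1.54 patterns to predicted failure
```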
Q7
(a) (i) The Stodola method requires knowledge of the distribution of both Mass and 2nd
Moment of Area along the length of the vessel - information which is not available in the
early stages of design.
(ii) The Todd or Schlick methods are based on empirical constants obtained from
similar basis ships and can provide reasonable estimates of natural frequencies knowing
only the principal dimensions.
(b) The calculation takes the values of M/I at each station and integrates them twice with
respect to length to determine the deflected shape of the hull girder in profile. The
deflection of the mid-point of the hull from the straight line joining the ends is found in
terms of the natural frequency ω. This is compared with the standard profile of a free-
free beam and hence ω can be determined. ω is around 11 rad/sec or 1.75 Hz for this
set of data.
(c) The Stodola method assumes that the deflection curve of the vibrating ship is the
sum of a series of the deflection curves for all modes of vibration. Applying the method
however forces the assumed curve to the form representing the fundamental mode.
Q8
(a) Typical devices include Transverse Thruster at bow and/or stern; Azimuthing
Propeller of which there are a number of proprietary examples; Vertical Axis Propeller in
a well known proprietary type; Active Rudder; Flapped Rudder.
(b) The major part of the descriptions of the manoeuvres are given in the standard
textbooks.
(c) The angle of heel is given by equating the heeling moment given by the product of
the centripetal force times the lever from half draught to KG to the righting moment for a
small angle of heel given by the product of Displacement times GM times angle of heel.
The answer to the question for the data given is 8.6 degrees.
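The heel calculation in part (c) can be sketched as follows; the speed, turn radius, lever and GM below are illustrative values only (the question's data are not reproduced in the report), so the result differs from the quoted 8.6 degrees:

```python
import math

# Illustrative data: speed V, turn radius R, lever from half draught to
# KG, metacentric height GM (all SI units).
V, R, lever, GM, g = 10.0, 300.0, 2.5, 1.2, 9.81

# Heeling moment  = (Delta * V^2 / R) * lever
# Righting moment = Delta * g * GM * phi   (small-angle assumption)
# The displacement Delta cancels, leaving:
phi = math.degrees(V ** 2 * lever / (g * R * GM))
print(round(phi, 1))   # heel angle in degrees for this data set
```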
(b) Aluminium's problems include low stiffness, poor fire resistance, fatigue and high
cost. The low stiffness of the material is not normally a problem in small craft but is
significant in terms of bending deflections and vibration in ships.
(c) (i) Reinforced plastics are often non-magnetic and/or have high strength to weight
ratio. They are very suitable for producing many identical hulls from a single mould.
(ii) Essentially the strength of the material is in the reinforcement. The fibres used are
intrinsically very strong but are also brittle. By surrounding the fibres with a flexible but
low strength plastic a relatively strong and tough material results.
Q10
(a) A full answer would cover four of these six modes - yield, buckling, fatigue cracking,
brittle fracture, plastic collapse & excessive deflection. Yield will occur in structure axially
loaded in tension or on the tension side of structure loaded in bending. Buckling will
occur in structure axially loaded in compression. Fatigue cracking will occur at
discontinuities in the structure, defects or at welds. Mention should be made of the
possibility of some interaction between them – e.g. a crack caused by fatigue over a long
period of time may be the initiator of brittle fracture when a sudden or excessive tensile
load is applied at low temperature.
(b) Some examples are given here for each phase of the life of the ship. The probability
of failure can be reduced during design by ensuring that the correct material is chosen,
that all loads are accurately determined, that the structure has adequate strength to
resist these loads with a realistic safety margin (under all reasonable failure modes) and
that detailing does not introduce discontinuities etc. Care can be taken during
construction to ensure that the correct materials are cut carefully to shape, aligned
Q11
(b) This part of the question is seeking illustration from some typical examples setting out
the way ships are developing world-wide:-
(i) Container ships are becoming larger and larger as the shore handling facilities
become more and more sophisticated.
(ii) Fast ferries are developing into a fast freighter market relying again on sophisticated
infrastructure to minimise delay in the entire delivery system.
Tankers appear unlikely to achieve the sizes (1m tonnes dwt) predicted 25 years ago -
navigation and environmental effects such as maximum tank sizes make them less
attractive.
(c) Ultimate limits on size include dry dock/canal dimensions (e.g. Panamax, Suezmax),
depths of navigable water (e.g. English Channel). Speed is limited by wavemaking
considerations for displacement vessels and power/fuel consumption/weight
considerations for semi-planing and planing craft. Cargo working is limited by crane
size/speed.
Q12
(a) (i) The key point for the first part of the question is the objectives of IMO :- to
achieve the highest standards of maritime safety, efficiency of navigation and prevention
of marine pollution.
(ii) IMO organises and administers international conventions in response to the needs of
the world maritime community. It seeks to gain agreement to its proposals in
negotiations at committee level before matters are brought to its assembly or conference
meetings. It has no powers to enforce its conventions - these must be enacted by
national governments and then implemented and enforced by the national agencies.
General Comments:
It was encouraging to see the level of preparation that some candidates had made for
the examination; this was evident in the standard of their answers. The best candidates
provided well structured solutions with clearly annotated diagrams and logically well laid
out calculations.
Throughout both sections of the examination, some candidates' solutions showed a
lack of planning and this resulted in badly structured solutions, which in some cases led
to omission of key points or errors in mathematical working. Candidates are
encouraged to plan their solutions carefully and to include this plan with their script where
appropriate. All stages of calculation should be shown in mathematical questions,
annotated where appropriate. It was difficult to read some scripts; candidates are
reminded of the importance of clear handwriting.
In the past, candidates have either failed to read the question correctly or to understand
it; this was apparent again this year. Some candidates spent a disproportionate amount of
time on parts of questions that attracted few marks. Candidates should take due regard
of the amount of marks for each part of a question and spend an appropriate amount of
time on it.
There was little difference in all questions between UK and overseas candidates. As has
been the case in the past, allowance was made for local practice.
Q1
Few overseas students tackled this question. This perhaps reflects a lack of
awareness and training amongst geotechnical engineers in the importance of
groundwater. The question aimed to determine the candidates' knowledge of porosity
and permeability related to rock.
Q2
The importance of the ability of geotechnical engineers to “think in 3 dimensions” is
reinforced by the inclusion of this question. Candidates were required to study the data
provided on the map and drill holes and, after drawing strike lines and associated
operations, to provide a longitudinal section. The question also challenged the
Q3
Rock mass properties were generally well appreciated and well answered. Candidates
were first asked to describe discontinuity features then to go on to describe
measurements that would be taken and how to analyse stability.
Q4
This question was designed to be attractive to candidates globally. Candidates were
required to describe the effect of weathering on the properties of rocks with which they
were familiar and then to review an appropriate classification system and its relevance to
site investigation and rock mass assessment. Unfortunately it was not well attempted by
many.
Q5
This question was poorly attempted by both UK and overseas students. Waste disposal
is an increasingly important area for the geotechnical engineer. The question required
candidates to contrast the two disposal philosophies and to comment on the
requirements of both.
Q6
The first part of this question required candidates to derive an expression for the torque
resistance of a shear vane in a soil with different soil strengths in horizontal and vertical
planes. This derivation is included in many of the standard texts. Many were successful;
however, some failed to provide a detailed derivation and marks were lost as a result.
Part (b) required the formulation and solution of simultaneous equations; some failed to
use appropriate units and some made mathematical errors.
In the final part, a description of the standard penetration test and its uses was required.
Many were able to complete only part of this, either its operation or its use.
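The vane-torque derivation referred to above can be illustrated numerically. The following is a sketch only: it uses the conventional expression in which the cylindrical side surface and the two circular ends mobilise (possibly different) shear strengths, and the dimensions in the test values are invented for illustration.

```python
import math

def vane_torque(D, H, s_side, s_end):
    """Torque resisted by a shear vane of diameter D and height H (SI units).

    Side (cylindrical) surface: s_side * (pi*D*H) * (D/2) = pi*D**2*H*s_side/2
    Two circular ends: 2 * integral of s_end*2*pi*r**2 dr   = pi*D**3*s_end/6
    With different strengths on vertical and horizontal planes, the two terms
    carry the two strengths, which is what yields the simultaneous equations
    asked for in part (b) when two vane sizes are used."""
    sides = math.pi * D**2 * H * s_side / 2.0
    ends = math.pi * D**3 * s_end / 6.0
    return sides + ends
```

With two vanes of different D and H, two measured torques give two linear equations in the two strengths.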
Q7
Construction of flow nets is an easy operation for those who have practised the
technique and it was encouraging to see many had. Candidates are reminded that the
net is formed of flow lines and equipotential lines that meet at right angles and form
curvi-linear squares. Those who successfully determined uplift and buoyancy on the dam
established its uplift at several points along the base and not only at the ends.
Water flowing through the soil is to be expected; however, it can become problematic if
it is excessive, possibly leading to failure. Candidates should have described
piping/heave in part (b) and the checks involving the final equipotential drops and the
critical hydraulic gradient.
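The piping check mentioned above compares the exit gradient at the final equipotential drop with the critical hydraulic gradient. A minimal sketch (the soil properties in the test values are assumed, not taken from the question):

```python
def critical_gradient(Gs, e):
    """Critical hydraulic gradient i_c = (Gs - 1) / (1 + e), i.e. gamma'/gamma_w,
    for specific gravity Gs and void ratio e."""
    return (Gs - 1.0) / (1.0 + e)

def piping_factor_of_safety(Gs, e, exit_gradient):
    """Factor of safety against piping/heave = i_c / i_exit."""
    return critical_gradient(Gs, e) / exit_gradient
```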
Q8
In answer to the first part of this question, many described skin friction and end bearing
adequately. Few made reference to the development of strength with deflection. In (ii),
many were able to explain what negative skin friction is, how it places extra load on the
pile, and how it can be avoided using a coating or sleeves/casing in the upper region of
the pile.
The final part required candidates to explain how piles can fail as a group where the soil
between the piles is moved with the piles and how this can be checked. Several
approaches were taken, and each was marked on its own merits.
Q9
The first part was well attempted by many; this required the determination of dry density
after calculating bulk density and moisture content. Some made errors in determining
these parameters although many were successful; some confused density and unit
weight. Plots of dry density versus moisture content were generally well drawn.
Candidates are encouraged in calculations such as these to tabulate the results as this
helps avoid errors.
Part (b) required the candidates to predict the likely moisture content at which the soil
could be placed if it were to have a dry density of 95% of maximum. Candidates should
have multiplied the value obtained in the first part of the question by 95% and then read
off the plot the corresponding moisture contents.
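The calculation described can be sketched as follows; the compaction results used here are illustrative numbers, not the examination data.

```python
def dry_density(bulk_density, moisture_content):
    """rho_d = rho_bulk / (1 + w), with the moisture content w as a decimal."""
    return bulk_density / (1.0 + moisture_content)

# Illustrative compaction results: (bulk density in Mg/m^3, moisture content)
results = [(1.85, 0.08), (1.98, 0.10), (2.06, 0.12), (2.04, 0.14)]
dry = [dry_density(rho, w) for rho, w in results]
rho_d_max = max(dry)

# Part (b): the target dry density is 95% of the maximum; the corresponding
# moisture contents are then read off the plotted dry density curve.
target = 0.95 * rho_d_max
```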
Candidates should have explained the various types of plant available for the compaction
of fine-grained and coarse-grained soils. Most plant works by either:
reducing friction, which is appropriate for coarse-grained soils, or
tamping/kneading the soil, which is appropriate for fine-grained soils.
Q10
Of those who attempted this question, most were able to calculate the stress path
parameters, though some made minor errors. When plotting the data, some used
different scales for the ordinate and abscissa axes, and this led to incorrect evaluation
of the strength parameters.
In the second part, the text stated that the clay was saturated; this implies that it has a
‘B’ value of unity. Candidates were then able to go on to evaluate Skempton’s ‘A’ value
at failure.
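A sketch of the calculations involved, i.e. the stress-path parameters and Skempton's pore-pressure relationship (the triaxial values in the tests are assumed for illustration):

```python
def stress_path_parameters(sigma1, sigma3):
    """Stress-path coordinates: s = (sigma1 + sigma3)/2, t = (sigma1 - sigma3)/2."""
    return (sigma1 + sigma3) / 2.0, (sigma1 - sigma3) / 2.0

def skempton_A(du, d_sigma1, d_sigma3, B=1.0):
    """From du = B*(d_sigma3 + A*(d_sigma1 - d_sigma3));
    full saturation gives B = 1, as the question implied."""
    return (du / B - d_sigma3) / (d_sigma1 - d_sigma3)
```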
Q11
Many successfully calculated the weight of the wall and evaluated Ka. Few fully
appreciated the significance of the sloping back of the wall and as a result failed to
correctly evaluate the factors of safety. Some wasted time redrawing the cross section
of the wall in their answer book and this attracted no marks.
Q12
Of those who attempted this question, many were able to score well on the first part.
Some made minor mathematical errors, which resulted in the loss of relatively few
marks. Some got confused between resolving forces and taking moments.
If a tension crack opens as a result of movement, it reduces the arc length of the slip
circle; the crack would also be likely to fill with water, which would cause further
instability. Candidates should have described how this extra force would be evaluated.
Methods of stabilising such a slope include: decreasing the slope angle, terracing,
surcharging the toe, soil nailing and ground anchors. Many were able to make a good
attempt at this part.
Q13
This question was attempted by a significant number of candidates with many of them
identifying the key stages of site investigation, i.e.:
Desk study/Data Collection
Walkover/Site visit
Ground investigation planning
Drilling, trial pits, sampling, in situ testing
Laboratory testing
Reporting
The better candidates were able to focus their descriptions on these operations to make
them relevant to a low-rise domestic construction.
In the final part of the question, candidates were to calculate the bulk density of a
piece of soil coated in wax. The key operations in this calculation were to:
Determine volume of the wax
Determine the volume of the soil sample
Determine the bulk density of the soil sample
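Those key operations amount to the following; this is a sketch only, and the masses, volumes and wax density used in the test are invented for illustration.

```python
def bulk_density_from_waxed_sample(m_soil, m_waxed, V_waxed, rho_wax):
    """Bulk density of a soil lump whose volume is measured after wax coating.

    m_soil  : mass of the soil lump before coating
    m_waxed : mass after coating (so wax mass = m_waxed - m_soil)
    V_waxed : total volume of soil plus wax (e.g. by water displacement)
    rho_wax : density of the wax"""
    V_wax = (m_waxed - m_soil) / rho_wax   # 1. volume of the wax
    V_soil = V_waxed - V_wax               # 2. volume of the soil sample
    return m_soil / V_soil                 # 3. bulk density of the soil sample
```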
General Comments:
The examination comprised nine questions on distinctive subject areas within the
syllabus.
Q1
A descriptive question dealing with different aspects of traversing.
Part (b) dealt with the three types of traverse, namely: (i) The loop traverse, which
connects back to its origin thus enabling angular and linear error to be located. It
cannot, however, locate scale error, which is a major disadvantage. (ii) The link
traverse, which commences from known points and connects with different known
points, thus all error can be assessed, including scale error. (iii) The open traverse,
which does not meet either of the above conditions, thus preventing any assessment of
error.
Part (c) dealt with the assessment of error in a link traverse. The angular error is
obtained by comparing the final known bearing with its computed value. The co-ordinate
error is obtained by comparing the computed with the known co-ordinates of the final
point.
Q2
An earthworks question in which the volume of excavation from each grid square is
obtained using ‘plan area x mean height’. The heights/depths of each grid point were
obtained from the difference of the ‘before’ and ‘after’ levels, easily obtained by adopting
a value for TBM ‘X’.
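The ‘plan area x mean height’ rule can be sketched as follows; the grid size and corner depths here are assumed, not the examination values.

```python
def grid_square_volume(plan_area, corner_depths):
    """Volume of excavation for one grid square = plan area * mean corner depth.
    Each depth is the 'before' level minus the 'after' level at that grid point."""
    return plan_area * sum(corner_depths) / len(corner_depths)

# e.g. one 10 m x 10 m square with corner cut depths of 1.2, 1.6, 1.4 and 1.8 m;
# the total excavation is the sum over all grid squares.
v = grid_square_volume(100.0, [1.2, 1.6, 1.4, 1.8])
```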
Q3
This solution required the reduction of the traverse field data to point co-ordinates. A
rough plot of these values would then illustrate the configuration of the traverse, as
suggested in the question. Thereafter the computation of the length and bearing of DA
would permit the easy solution of the right-angled triangle DEA for the length and
bearing of AE.
Q4
(a) A straightforward question dealing with the control of gradients using ‘sight rails and
boning rods’. The levelling was reduced to give peg levels at P and Q. Invert levels
were then calculated, to which the height of the boning rod is added to give the sight rail
levels, from which the height above the pegs is easily deduced.
(b) Assuming a formation level value, the value of the ground levels at the upper and
lower edges of the cutting may be calculated. Using the same formation level, the staff
readings may be used to compute the same values. Where they agreed, the slope stake
position was correct; where they disagreed, it was incorrect.
Q5
It was necessary to obtain the chainage and reduced level of the intersection point (I) of
the gradients. This is easily done by means of a simple algebraic expression for the
level of I from A, using a coefficient x for the horizontal distance from A to I; equated with
a similar expression, with the same coefficient included, from B. Thereafter it becomes a
standard vertical curve computation for the required solution.
Q6
Part (a) required a knowledge of elementary geodesy dealing with the geoid and the
ellipsoid in order to understand clearly how orthometric levels are obtained from GPS
data.
Part (b) Real Time Kinematic (RTK) is the GPS surveying procedure that is most
important to the engineer as it enables co-ordinate position to be obtained in real time
and so facilitate setting-out on site.
Part (c) GPS antennae attached to appropriate parts of the construction plant supply
three-dimensional position to the on-board computer, enabling actual position to be
compared with design position already stored in the computer. The more advanced
systems actually control and guide the plant.
Q7
Required a general description of the process of aerial photogrammetry, from flight
planning for stereo viewing and complete cover; type of camera; use of appropriate
ground control to facilitate the correction of error due to tilt and ground relief, i.e.
restitution.
Q8
Part (a) required the classification of error into mistakes, systematic error and random
variate. Examples were required in each category.
Part (b) required an understanding of the important and fundamental statistics, namely,
standard deviation and standard error. The basic formulae produce their value for 68%
confidence and must be multiplied by 1.96 to give the more generally used 95%
confidence range.
Finally, as standard error equals standard deviation divided by the square root of the
number of observations, part (b) (iii) is easily obtained.
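Numerically, the statistics in part (b) reduce to the following sketch (the sample values in the test are assumed):

```python
import math

def standard_error(std_dev, n):
    """Standard error of the mean = standard deviation / sqrt(n)."""
    return std_dev / math.sqrt(n)

def range_95(std_dev, n=1):
    """The basic formulae give 68% confidence; multiplying by 1.96 gives the
    more generally used 95% confidence range (n=1 is a single observation)."""
    return 1.96 * standard_error(std_dev, n)
```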
Q9
A straightforward transition curve problem for which all formulae are supplied. An
examination of the formulae shows that it is absolutely necessary and therefore a priority
to have the values of minimum radius ‘R’ (in this instance given) and length ‘L’ of the
transition (in this instance computed from design speed and radial acceleration).
The important point thereafter is to obtain the correct through chainage at the start of the
entry transition and the circular curve in order to obtain the lengths of the sub-chords at
those points, and hence calculate the correct setting-out angles.
General Comments:
Although the examination paper this year contained some questions of a less familiar
type, the distribution of the examination marks was similar to that of last year. This is
seen as indicating that those students following courses leading specifically to the
Engineering Council examination were being trained to tackle a range of types of
questions and not to attempt too much question spotting.
Q1
An easy question, but attempted by few candidates. Some candidates who failed to draw
a diagram for part (b) ended up by solving a different problem. Some attempts at using
calculus demonstrated a weakness in basic understanding.
Q2
Several candidates had obviously prepared themselves for a relaxation problem and
were determined to show this – even though it was not asked for by the question.
Q3
A not very demanding question, requiring only very familiar techniques, but nevertheless
attempted by very few candidates. Determining the directions in which the forces act
clearly proved difficult.
Q4
A regularly appearing type of question, attempted reasonably well by many candidates,
except for the final part where many assumed that all the legs of the magnetic circuit are
at the same working point on the B/H characteristic.
Q5
Part (a) was generally well done, although there were many errors over units. Part (b)
was straightforward for those who understood part (a). Many candidates failed to see
that part (c) was very straightforward, and made wild guesses at the answer. Part (d)
was again generally well done, although many candidates quoted the energy in watts
instead of joules.
Q6
Attempted by many candidates, but the answers often contained numerous trivial errors.
Candidates who re-defined the axes to simplify the problem often forgot to re-define the
function.
Q8
Few candidates attempted this question, although it only required basic circuit
considerations and there were no hidden difficulties.
Q9
An easy question, although some candidates did not read the text correctly and
attempted to solve it using mesh or nodal analysis.
Q10
A straightforward question, with many answers spoilt by trivial arithmetical mistakes.
General Comments:
There seemed to be a general increase in the standard of answers for this year over the
last two years. Candidates seem to be better prepared to answer questions involving
machines operated as part of a variable speed drive.
Q1
Around 90% of the candidates attempted this question.
Part (a). Almost all candidates correctly identified the various torques on the
torque-speed curve.
Part (b). This part was more difficult and around 30% of the candidates got it right. Many
candidates could not obtain the right values because they were not clear about the
relationship between line and phase voltages and currents in a delta-connected stator.
Power factor was computed correctly by almost everyone. 20-30% of candidates
confused electromagnetic gap power with developed power. The answers for input
power, torque and efficiency were often wrong because of improper use of line currents
and voltages. Nevertheless, about 80% of candidates scored more than half marks in
this part.
Q2
Approximately 80% of the candidates attempted this question and, in general, high
marks were obtained for it.
Part (a). The torque-speed characteristics were correctly found by most of the
candidates for the cases where the supply frequency was equal to the base frequency
(cases (i) and (ii)). However, many candidates were unable to sketch the torque-speed
characteristic for the case where the voltage magnitude and frequency were both halved
(case (iii)).
Part (b). Two common mistakes for this part of Q2 were the following:
Part (c). Very few candidates were able to discuss coherently all three cases in this part
of Q2. Most of the candidates provided only partial answers to the stated questions and,
worryingly, a relatively large number of the candidates provided completely nonsensical
answers.
Q3
Very few candidates (12.5%) attempted this question although it was intended as a
straightforward question. We assume the reluctance to tackle the question is due to two
factors: a reluctance to tackle descriptive rather than numerical questions, and a lack of
familiarity with induction machines fed by variable-frequency inverters. This second point
seems to be supported by the poor performance in the later parts of Q2 also.
Q4
Approximately 80% of the examinees attempted Q4 and, in general, the candidates
performed fairly well. A relatively large number of candidates got full-marks in this
question.
Part (a). Most of the candidates were able to answer that a series resistance is
frequently added to reduce the starting current of the machine. Fewer candidates were
able to discuss why it is done.
Part (b). Most candidates did not find case (i) problematic. For case (ii), some of the
examinees failed to realise that in steady state dZ/dt = 0, and this led them to solve a
first-order differential equation. This approach, although perfectly valid and in most
cases successfully carried out, made the solution procedure unnecessarily long.
Some other candidates took the armature current for case (ii) equal to the starting
current, which was a wrong choice. Fewer examinees attempted to solve case (iii) in
comparison with cases (i) and (ii), but most of those who did try to solve case (iii) were
able to find the right answer.
Q5
Part (a). Most answers were good. Weak answers talked only of the existence of rated
values without, for instance, linking the maximum current rating to the heating effect of
the current, the cooling provided and the temperature properties of the insulation.
Part (b). The question was intended to provoke a discussion of the heating effect of the
operating point versus the cooling provided by the fan. Good answers noted that the
cooling provided by the fan was much reduced a low speed and found an average
(RMS) equivalent torque for the on-off case. Several candidates were side-tracked into a
discussion of how to provide speed control. This was OK but wasted time. Solutions that
used a gearbox or simply stated that the speed or torque weren’t possible (because
perhaps the slip was too high) were marked incorrect.
Part (d). Generally, this was poorly answered with only a small number of candidates
able to relate torque and moment-of-inertia to the dimensions of the machine and very
few mentioned the need to consider the inertia of the machine relative to the load inertia.
Q6
Most candidates who attempted the question gave good answers. The numerical part of
the question was straightforward and many fully correct answers were given. Common
errors were to confuse the speed at which the back EMF had been measured (4,000
rpm) with the speed at which the machine was required to operate (3,000 rpm). Some
candidates simply used the 36V figure for back EMF throughout the question rather than
scaling it for speed. Some candidates quoted an answer for mark-to-space ratio (ton/toff)
rather than duty-cycle (ton/ (ton+toff)).
In describing the control loop most candidates chose a single loop that varied duty-cycle
according to the speed error. This obtained 3 marks. 4 marks were awarded where a
current limit was added or an inner current loop was shown.
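The two points the examiners flagged, that the back EMF scales with speed and that duty-cycle is not the mark-to-space ratio, can be checked with a short sketch using the figures quoted above:

```python
def scale_back_emf(E_measured, n_measured, n_operating):
    """Back EMF of a permanent-magnet DC machine is proportional to speed:
    E2 = E1 * n2 / n1."""
    return E_measured * n_operating / n_measured

def duty_cycle(t_on, t_off):
    """Duty cycle = ton / (ton + toff); the mark-to-space ratio would be ton / toff."""
    return t_on / (t_on + t_off)

# 36 V measured at 4,000 rpm scales to 27 V at the 3,000 rpm operating speed.
E_3000 = scale_back_emf(36.0, 4000.0, 3000.0)
```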
Q7
This question was better answered than corresponding questions in previous years.
Most candidates could accurately describe thyristors, GTOs, IGBTs and BJTs.
Relatively few candidates could state why snubber circuits are sometimes used.
Answers could have covered loss reduction or control of di/dt and dv/dt. Answers
mentioning both got full marks. Most candidates could describe PWM or variable
frequency control of an induction motor. Only a few candidates brought these two
aspects together into a coherent answer.
Q8
About 80% of candidates attempted this question. This question tested the candidates’
understanding of rotating magnetic fields and the mechanism of electromagnetic torque
production.
Part (a). Almost all candidates were able to give a good explanation of how direction of
rotation related to phase sequence and hence the connections made.
Part (b). This part was about evaluating torque of single-phase motor given the electrical
circuit parameters and terminal conditions (current, voltage and speed). Overall the
answers showed a good level of knowledge and ability. It was interesting to see that
almost all the students obtained the correct slip which is a measure of their
understanding of induction motor operation. The forward and backward torques were
calculated correctly in most cases. There are several possible approaches to the
calculation. More than half the candidates obtained full marks in this part, and several
more lost only 1 or 2 marks for mistakes in the calculation. A small number of
candidates mistakenly performed the calculation as if this was a three-phase case. As in
question 2, several candidates confused the electromagnetic power crossing the gap
with the developed electromechanical power. Some candidates did not use the correct
Q9
The basics of chopper circuits were generally well understood. In describing the
operation of a chopper, most candidates correctly stated the average armature voltage.
Some justified this in terms of how the chopper operates. A surprising number of
candidates did not accurately describe the role of the diode. Some described the diode
as being present to absorb the inductor energy (rather than simply providing a current
path through which inductor energy can be transferred to the armature).
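For reference, the ideal relationship in a single-quadrant chopper is as follows (a sketch; the supply voltage and duty cycle in the test are assumed values):

```python
def chopper_average_voltage(V_supply, duty):
    """Average armature voltage of a step-down (first-quadrant) chopper = D * Vs.
    While the switch is off, the freewheel diode provides the path through which
    the inductive armature current, and hence the stored inductor energy,
    continues to flow into the armature; neither voltage nor current can
    reverse in this circuit."""
    return duty * V_supply
```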
Most candidates correctly identified the circuit as a single/first quadrant chopper. Some
went on to describe alternative circuits although this wasn’t strictly necessary. Full marks
went to candidates who explained (as asked) why neither the voltage nor current can be
reversed in this circuit.
Field weakening was known by most candidates but not all could clearly explain why it is
used. Some confused cause and effect when describing why the torque available is
reduced as the speed is increased. Some claimed that in weak field the torque is
increased. This was based on the transient effect that suddenly reducing the field
reduces the EMF and increases the current. These answers failed to acknowledge that
this is a transient effect and that the motor will soon accelerate to a higher speed where
the current and reduced field will yield a reduced torque.
General Comments:
Q1
Although the majority of candidates successfully answered the first part of the question,
it was disappointing that a number of students were not able to perform the
straightforward calculation of active and reactive output of a synchronous generator.
Very similar questions have been set in previous examinations. This topic is elaborated
in Chapter 3 of the recommended text.
[b – (i) Q = 115.4 MVar, (ii) 138.5 MVar, c: 74.8 MVar, = 34.35°, 2 p.u. d: 64 MW,
= 14.85°, 2.2 p.u.]
Q3
This topic is discussed in Section 12 of the textbook. This question was attempted only
by a small number of candidates and it was disappointing that the majority obtained a
relatively low overall mark. Specifically, the pricing principles are dealt with in Section
12.2. The concepts of social welfare, demand elasticity, costs, profits, marginal cost,
economic efficiency and consumer surplus are explained in this section.
[b-(i) 150kWh, £1.125, £5.25, £5.25, (ii) 35kWh, £4.4 (iii) –2.333]
[c-(i) C’ = 0.015 Q + 1.3 , 3.52 p/kWh, 148 kWh (ii) £5.21, £1.64, 2.41p/kWh]
Q4
This area is dealt with in Chapter 7 and the calculation for three-phase balanced faults is
specifically dealt with in Section 7.2. The calculation was fairly straightforward for parts
(i) while for part (ii) the delta-star transformation is required (described in Section 2.8 on
page 88). Candidates managed to score respectable marks on this question, but many
missed out on the contribution from each generator for which 6 marks were available.
[(i) 2.1kA, 14.22kA, 10.91kA, (ii) 1.88kA, 10.88kA, 11.74kA]
Q5
Transient stability and voltage stability are discussed in Sections 8.3 and 5.8 respectively.
The calculation part was quite straightforward and was based on the fundamental
relationship P = (V1·V2 / X)·sin δ. From this equation, the voltage angle δ can first be
calculated.
The current is then obtained by dividing the complex voltage difference by the combined
impedance of the transformer and the line. Given the terminal voltage and the current,
the internal voltage can easily be found by calculating the voltage drop in the machine.
[1∠23.58°, 1.02∠11.8°, 1.06∠34.5°]
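The calculation chain can be sketched in per-unit complex arithmetic. All numerical values below are assumptions chosen for illustration, not the examination data.

```python
import cmath
import math

P, V_bus, V_term = 0.8, 1.0, 1.0    # per-unit power and voltage magnitudes (assumed)
X_link, X_machine = 0.5, 0.3        # transformer+line and machine reactances (assumed)

# 1. Voltage angle from P = V1*V2*sin(delta)/X.
delta = math.asin(P * X_link / (V_term * V_bus))

# 2. Current: complex voltage difference divided by the link impedance.
Vt = cmath.rect(V_term, delta)
I = (Vt - V_bus) / complex(0.0, X_link)

# 3. Internal voltage: terminal voltage plus the drop across the machine reactance.
E = Vt + complex(0.0, X_machine) * I
```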
Q6
This topic is dealt with in Chapter 5 of the textbook, specifically in Sections 5.4, 5.5 and
5.6. Candidates are required to be able to carry out a simple voltage drop calculation
across a reactive circuit, based on the approximate formula 2.9. Calculation of the
capacitor bank size is found from the condition that voltage drop needs to be
compensated.
[(i) 9.6kV, (ii) 10, (iii) 11MVar]
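The voltage-drop calculation and the compensation condition can be sketched as follows (the feeder figures in the test are assumed for illustration):

```python
def voltage_drop(P, Q, R, X, V):
    """Approximate in-phase voltage drop across a feeder: dV = (P*R + Q*X) / V."""
    return (P * R + Q * X) / V

def capacitor_var_for_zero_drop(P, Q, R, X):
    """Shunt capacitor rating Qc such that the net drop vanishes:
    (P*R + (Q - Qc)*X) / V = 0  ->  Qc = Q + P*R/X."""
    return Q + P * R / X
```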
Q7
The topic of power system protection is dealt with in Chapter 11. Current and voltage
transformers are described in Sections 11.4.1 and 11.4.2. Overcurrent and directional
overcurrent relays are described in Sections 11.5.1 and 11.5.2, while distance protection
and differential protection schemes are discussed in Sections 11.6 and 11.7.
Q8
Switching surges and the issues of interrupting short circuit currents are discussed in
Sections 10.2.2 and 10.2.3. The candidates were required to appreciate that if the
energising circuit breaker chops the current flowing into the transformer, an energy equal
to 0.5Li² will remain within the magnetising inductance of the transformer. This energy
will oscillate between the transformer capacitance to earth and the magnetising
inductance at the resonant frequency. When the energy is transferred to the
capacitance, the phase-to-earth voltage will be determined by 0.5Cv². As C is much
lower than L, the voltage will be significantly higher. It is important to note that no natural
current zero occurs in a DC system and the interruption of DC currents can therefore not
rely on removing arc conductivity/increasing dielectric strength of a gap during a natural
current zero. A current zero must instead be forced by increasing the circuit breaker arc
voltage.
[23.5nF, 3.23kV/µs, 39.2nF].
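The energy balance described above reduces to a one-line estimate (a sketch; the values in the test are chosen purely for arithmetic convenience):

```python
import math

def chop_overvoltage(i_chopped, L_mag, C_earth):
    """Energy balance 0.5*L*i**2 = 0.5*C*v**2  ->  v = i * sqrt(L/C).
    Because the capacitance to earth is tiny compared with the magnetising
    inductance, the resulting voltage greatly exceeds normal phase voltage."""
    return i_chopped * math.sqrt(L_mag / C_earth)
```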
Q9
A number of candidates attempted this question with varying degrees of success. Very
similar questions were asked in previous examinations and from that perspective it was
disappointing that a number of students obtained a very low mark. The power-frequency
characteristics of interconnected systems are discussed in Section 4.5. Specifically,
frequency regulation of systems connected with weak links is elaborated in Section 4.6,
together with an example, which is very similar to this question. Similar exercises can be
found in Problems Section (4.5, 4.6 and 4.7) at the back of Chapter 4.
[2348MW/Hz, 49.88Hz, 49.72Hz, 50.07Hz]
General Comments:
Disappointingly only 50% of the candidates sitting the examination achieved a pass
grade, which is down from the 55% pass rate achieved in 2002 and the 60% pass rate
achieved in 2001. 14% of candidates obtained a pass at Grade A or B compared with
the 20% obtaining these Grades in 2002. However it was still an improvement on the 9%
obtaining these Grades in the 2001 examination. It is exceedingly disappointing to report
that some 44% of the candidates obtained a Grade F. Most of the candidates still
perform poorly in the questions that are design oriented or that are not closely related to
examples which they have come across in the past. Centres are again encouraged to
expose their candidates to more material containing analogue and digital design
examples, as this area is expected to play an increasing role in future examinations.
Q1
This question tests ability to design combinational logic circuits given simple
specifications and the understanding of a simple full adder circuit.
The question also tests the ability:
(i) to apply Karnaugh map reduction techniques, and
(ii) to implement logic circuits using digital multiplexers.
Some 85% of candidates attempted the question and many gained good marks with
almost half of the attempts attracting 14 or more marks.
The combinational circuit in part (a) is a simple three input, three output circuit and a
large proportion of the candidates were able to derive the correct Boolean equations for
the logic circuit. However very few candidates were able to show how the circuit could
be implemented using a full adder circuit and an inverter gate.
Most candidates were able to implement an appropriate 8-to-1 multiplexer for the four
input, one output specification given in part (b). However many candidates showed they
did not understand multiplexer operation when they were unable to reverse the process
and produce accurate Boolean equations when given 8-to-1 multiplexer input conditions
in part (c).
Q2
This question tests understanding of the basic characteristics of an operational amplifier.
The examiner is surprised that less than 20% of the candidates attempted this question
as part (a) and most of part (b) could be considered to be textbook material. Few of the
candidates who attempted the question performed well, with only one candidate
identifying that the amplifier as shown in part (d) was connected in a positive-feedback
mode and as such would be unstable.
Numerical answer: (i) gain-bandwidth product = 10MHz
(ii) closed-loop bandwidth = 25kHz
(iii) max. output peak voltage = 3.98V
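For reference, figures of this kind follow from elementary op-amp relationships. The sketch below uses an assumed closed-loop gain and slew rate that happen to be consistent with the answers above; they are illustrative assumptions, not the examination's given data.

```python
import math

GBW = 10e6     # gain-bandwidth product, Hz
A_cl = 400     # assumed closed-loop gain
SR = 1e6       # assumed slew rate, V/s (i.e. 1 V/us)
f_sig = 40e3   # assumed signal frequency, Hz

f_cl = GBW / A_cl                      # closed-loop bandwidth
v_peak = SR / (2 * math.pi * f_sig)    # largest undistorted sine peak (slew limit)
```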
Q3
This question tests ability to analyse a feedback circuit employing an operational
amplifier and the understanding of the necessary conditions to maintain oscillations in a
circuit. Around 50% of the candidates attempted the question with many gaining good
marks but many others very poor marks. The good attempts showed appropriate
analysis to derive the relationship between the frequency of oscillations and the circuit
components required in part (a) and deduced the minimum value asked for in part (b).
Only a few candidates estimated a correct value for R in part (c) with most candidates
going directly to Fig. Q3(b) with the 3V value instead of determining what 3V represented
as a voltage across Rf. A few candidates deduced the relationship asked for in part (d)
but many others could not handle the equations to derive the required relationship.
Numerical answer: (a) fo = 1 / (2π√(C1C2R1R2)) (b) Rf / Rmin = 2
(c) R = 12.5 kΩ (d) Rf > 0
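The frequency relationship from part (a) evaluates as follows (the component values in the test are assumptions chosen for illustration):

```python
import math

def oscillation_frequency(C1, C2, R1, R2):
    """fo = 1 / (2*pi*sqrt(C1*C2*R1*R2)), the relationship derived in part (a)."""
    return 1.0 / (2.0 * math.pi * math.sqrt(C1 * C2 * R1 * R2))
```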
Q4
This question tests understanding of transistor circuits and the ability to perform simple
design calculations. Overall the question was quite well answered with it being attempted
Q5
With the first part of this question testing standard negative voltage feedback theory it is
not surprising that it was attempted by 75% of the candidates. In general the textbook
work of part (a) was very well answered. A few candidates still got mixed up with polarity
and made no comment when their analysis produced negative signs in the expressions
deduced for the amplifier input and output resistances.
In part (b) only about half of the candidates managed to correctly calculate the feedback
ratio of the operational amplifier but most of these candidates then obtained full marks
for the part. The feedback ratio of 0.5, which does not take the amplifier input
impedance into account was the value obtained by most candidates and this was
accepted as correct. The few candidates who attempted to take the input impedance into
account in their calculations, were rewarded for any correct analysis.
Part (c) tested ability to understand the effect of feedback on the gain of an amplifier and
to perform gain calculations when values are expressed in dB and as percentages.
Overall the candidates produced poor attempts to this part of the question and only a
few candidates obtained full marks for their attempts. Many candidates produced one or
two pages of calculations with no clear statements of what they were attempting to do. A
simple diagram at the start of their working would help illustrate what they are attempting
to do. Many of the attempts failed to take into account that the gain of each of three
stages was to be allowed to fall by 10%. Some attempts gave overall gain values for
10% increases in internal gain, which was not required.
Numerical answer (b) input impedance = 7.5 × 10¹⁰ Ω, output impedance = 1.2 mΩ
(c) minimum open-loop gain = 77 dB and feedback ratio = 0.486.
Q6
Part (a) is textbook work while the part (b) and part (c) test understanding of the
operation of simple circuits employing semiconductor diodes.
Approximately 40% of the candidates attempted this question and their performance on
the question was mixed with few candidates performing well and many others quite
poorly. In part (b) many candidates just wrote down mesh equations on the assumption
that both diodes were conducting and obtained output voltages in excess of 5V. Other
candidates deduced an output of 0V for both inputs at 5V. In part (c) few candidates
obtained the correct output waveform with many candidates ignoring the 0.6V knee
voltage in their calculations. It is disappointing to report the overall poor performance in
dealing with what are relatively simple basic circuits.
Numerical answer: (i) vo = 5V (ii) vo = 0.859V (iii) vo = 0.733V
Q7
Part (a) was well answered by most candidates; however, in part (b) a number of
candidates were unable to perform the simple A-to-D converter calculations. A number
of candidates gave the maximum conversion time of the A-to-D converter as the answer
to (ii) and if their calculation was correct they were credited with the marks as the
question did not precisely ask for the 3.728V conversion time. In part (c) candidates
were able to reproduce appropriate information on data acquisition systems but few
candidates mentioned analogue switches, the main component of an analogue
multiplexer, in their responses.
Numerical answer: (i) digital input = 0101110101
(ii) conversion time = 373 µsec
(iii) resolution = 10mV or 0.1% of full scale
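The quoted answers are mutually consistent under a few assumptions. A minimal sketch, assuming a 10-bit counter-type (single-slope) converter with 10 mV resolution; the 1 MHz clock rate is an assumption chosen only to match the 373 µs figure:

```python
# Checking the consistency of the quoted A-to-D answers, assuming a
# 10-bit counter-type (single-slope) converter with 10 mV resolution;
# the 1 MHz clock rate is an assumption chosen to match the 373 us figure.
RESOLUTION_V = 0.010        # 10 mV per step
CLOCK_HZ = 1_000_000        # assumed counter clock
BITS = 10

code = int("0101110101", 2)            # the quoted digital output word
analogue_v = code * RESOLUTION_V       # voltage the word represents
conversion_s = code / CLOCK_HZ         # one clock period per count
full_scale_v = (2 ** BITS) * RESOLUTION_V

print(code)                                    # 373
print(round(analogue_v, 3))                    # 3.73 (~3.728 V quantised)
print(round(conversion_s * 1e6, 1))            # 373.0 microseconds
print(round(100 * RESOLUTION_V / full_scale_v, 4))  # just under 0.1 % of full scale
```

The digital word 0101110101 equals 373 decimal, so 373 counts of 10 mV reproduce the ~3.728 V input and, at the assumed clock rate, the 373 µs conversion time.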
Q8
This question tests knowledge of flip-flop operation and the ability to analyse and design
simple synchronous circuits using J-K flip-flops. Over 80% of candidates attempted the
question with many producing good answers and around a quarter of them gaining 16 or
more marks for the question.
Part (a), which tested a candidate's knowledge of edge-triggered S-R flip-flops, was very
well done by many candidates although a number of candidates initiated the flip-flops
from the set condition when the question stated that the flip-flops were initially reset.
Unfortunately the waveforms of Fig. Q8(b) contained a small ambiguity. Changes to the
Q output waveform should all have been in time with the 1-0 edge of the clock pulse.
The first two transitions of Q are correct in the diagram but the next three transitions
wrongly take place just before the 1-0 transition of the clock pulse. The question asked
candidates to check if the flip-flop was operating properly and if not identify a probable
fault. Candidates who identified that the Q output was not changing on the clock edge
and suggested a timing or synchronisation problem were given the marks for the part.
Some candidates took the Q output to be operating in time with the 1-0 clock edge
throughout and correctly identified a fault on the 5th clock pulse when, according to the
J-K inputs, the Q output should remain in the reset condition. If the J input was open circuit
it would produce this fault condition.
Many candidates who were able to correctly interpret the sequential design specification
of part (c) went on to produce an appropriate schematic diagram. However, many others
were unable to make any progress with the specification, and some made simple
errors in the process of producing a schematic diagram.
Q9
The first part of this question tests (a) understanding of the small-signal equivalent circuit
of a FET and (b) the ability to perform an amplifier design calculation. Part (b) of the
question tests understanding of MOS transistor behaviour while parts (c) and (d)
test knowledge of the topic of CMOS logic circuits. About 25% of the candidates
attempted this question and generally the attempts were poor with only a few candidates
gaining even half marks. Some candidates produced good answers to part (a) but a
Q1
A popular question generally scoring well.
Part (a) & part (b). Most candidates displayed good knowledge of the multiplex structure of
the PDH. Many, unfortunately, did not address the question asked, which required an
explanation of the specific design features with reference to digital telephony. It is, of
course, the fact that the basic channel parameters, and thus the entire subsequent
multiplex structure, are based upon the sampling and quantisation requirements of a
3.4 kHz telephone channel. This results in the basic 64 kb/s bit stream which combines in
groups of 30 channels to form the 2.048 Mb/s level 1 multiplex.
Part (c) & part (d). SDH is also well understood except that only a minority of responses
quoted the rather obvious difference in switching methods – circuit switching (space and
time switching) in PDH, packet switching in SDH, together with all the concomitant
advantages of packet switching. Level 3 PDH carries 30 x 4 x 4 = 480 speech channels;
STM-1 SDH carries (270-9) x 9 = 2349 channels.
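As a quick check, the two channel counts quoted above follow directly from the frame structures described:

```python
# Quick arithmetic check of the channel counts quoted above.

# PDH level 3: 30 channels per 2.048 Mb/s primary multiplex,
# multiplied by 4 at each of the next two levels.
pdh_level3 = 30 * 4 * 4

# STM-1: a 270-column x 9-row frame, minus 9 overhead columns,
# with each remaining byte position carrying one 64 kb/s channel.
stm1_channels = (270 - 9) * 9

print(pdh_level3)      # 480
print(stm1_channels)   # 2349
```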
Q2
Also a popular question with a majority of marks exceeding the pass level.
Part (a). The first part of this question required only a simple statement of the sampling
and quantisation requirements of a speech signal plus a comment regarding the benefits
of non-uniform (logarithmic) quantisation or some other simple bit-rate reduction scheme
(e.g. delta modulation).
Part (b) Channel coding matches the characteristics of the signal to those of the channel
or transmission medium to achieve best possible BER and data transparency. HDB3 is a
well-used example.
Part (c). Most candidates can quote the layers in the ISO-OSI model and are aware that
physical signals are appropriate to the lowest 3 layers.
Q3
Less popular question, nevertheless generally well handled.
Part (a) and part (b). Most responses were aware of the need for some sort of protocol to
control the sharing of access by a number of data terminals to a common data network.
Many could also describe a simple ALOHA scheme together with the attendant problems
of data loss due to concurrent transmission.
Part (d). Maximum waiting time is the sum of station latency, token length and round trip
propagation delay and amounts to 5060ns.
Q4
Part (a). Integrated Services Digital Network was one of the first dedicated schemes for
transmission of digital data over the subscriber local loop.
Part (b). The 2B+D structure of the basic rate access is well understood but few
responses were able to describe how ~192 kb/s data streams could be multiplexed into
an SDH container.
Part (c). The basic rate bit rate is constrained by the dispersion and loss associated with
the copper wire-pair used for local loop connection but can nevertheless offer services
like internet access, fax, video conferencing etc not available via POTS.
Part (d). Higher bit rates are catered for via the primary access (e.g. 30B+D) or variants
on the DSL.
Q5
Very few responses to this question, generally scoring poorly.
Part (a). Not many years ago most candidates were aware of the need for traffic auditing
as a precursor to network design particularly in the case of exchange systems. This
year responses to this question were poor.
Part (b). In contrast many candidates have a clear perception of the stages involved in
packet data transmission viz: A-D conversion (including source coding), packetisation
(including addition of header and control data), packet buffering to allow for statistical
fluctuations in packet reception, de-packetisation, D-A conversion. Total delay should be
kept to below 300ms (typically ~100ms) to avoid it being noticeable to users.
Part (d). No correct answers to this section: P(m>k) = (λ/µ)^(k+1) = (0.4)^11 = 4.2×10^-5
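The quoted probability is the standard M/M/1 result for more than k customers waiting, with utilisation λ/µ = 0.4; it can be checked numerically:

```python
# Probability that more than k customers are in the queue for an
# M/M/1 system: P(m > k) = rho**(k + 1), where rho = lambda/mu.
rho = 0.4   # utilisation, lambda/mu
k = 10
p = rho ** (k + 1)
print(p)    # ~4.2e-05
```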
Q6
Part (a). Q6 was attempted by the majority of candidates. Most candidates understood
that external noise arises from outside the receiver whilst internal noise arises inside it.
The most popular examples of internal noise given were thermal noise and shot noise,
which are correct. (Several candidates wrote shot noise as short noise, which was
accepted as a slip of the pen –although candidates are reminded that care must be
taken so as not to lose marks through typographical or careless errors.)
Part (b). Very few candidates attempted this part of the question and of those that did a
substantial number were clearly guessing. For full marks the required sketch needed to
Part (c). This noise system calculation was well answered by most candidates, many
arriving at the correct numerical answer of 1119 K.
Q7
Part (a). Most candidates answering this question gave only relative values (more, less
etc.) of attenuation, bandwidth and interference immunity and this was accepted (making
the question a little easier than had been originally intended). Some candidates ranked
the relative immunity to interference in exactly the wrong order (saying that twisted pair
was highest and optical fibre lowest). I suspect many of these candidates thought they
were ranking susceptibility to interference rather than immunity but since the word
immunity was reproduced by these candidates in their answers no interpretation was
possible and therefore no leniency could be given. This is another example of students
losing marks by not taking care with answers. (It is possible that some of these
candidates did not understand what was meant by immunity but it is not felt by the
examiners that this is an obscure, or unreasonable, use of language in an English
medium examination.)
Part (b). The meaning of intersymbol interference was adequately explained (as an
overlapping of adjacent signalling pulses in time) by many candidates who correctly
referred to dispersion as its cause. (Many candidates illustrated the effect with a sketch
rather than describing it in words.) Relatively few candidates could explain why the
resulting error rate is said to be irreducible, however. (It is because the error rate cannot
be improved simply by increasing signal power.)
Part (c). There were some good explanations of the HDB3 line code, many candidates
giving the detailed coding algorithm and also listing its advantages. Also the majority of
candidates gave a correct HDB3-coded signal representation of the given bit sequence.
(Many ignored the instruction to assume that the string of four zeros was substituted with
100V and showed the result for a substitution by the alternative sequence 000V. This
‘error’ was condoned but is yet another example of candidates risking losing marks
unnecessarily.)
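The HDB3 coding algorithm mentioned above can be sketched briefly. This is a minimal illustration only: it follows the standard rule in which each run of four zeros is replaced by 000V or B00V so that successive violations alternate in polarity, and the initial mark polarity is an arbitrary convention.

```python
def hdb3_encode(bits):
    """Encode a binary sequence into HDB3 levels (+1, -1, 0).
    Marks (1s) alternate polarity (AMI); each run of four zeros is
    replaced by 000V or B00V so that successive violations alternate."""
    out = []
    last_pulse = -1           # polarity of the last transmitted pulse
    pulses_since_violation = 0
    zeros = 0
    for b in bits:
        if b == 1:
            last_pulse = -last_pulse
            out.append(last_pulse)
            pulses_since_violation += 1
            zeros = 0
        else:
            zeros += 1
            out.append(0)
            if zeros == 4:
                if pulses_since_violation % 2 == 0:
                    # B00V: insert a balancing pulse B in the first position
                    last_pulse = -last_pulse
                    out[-4] = last_pulse
                # violation V has the same polarity as the preceding pulse
                out[-1] = last_pulse
                pulses_since_violation = 0
                zeros = 0
    return out

print(hdb3_encode([1, 0, 0, 0, 0, 1]))  # [1, 0, 0, 0, 1, -1]
```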
Q8
Part (a). A surprising number of students attempting this question struggled to give
precise (unambiguous) definitions of the quantities asked for in part (a). This was often
the case even when the same candidates went on to use these quantities properly to
find a correct answer for part (b). To obtain both marks for the definition of G/T
candidates had to be precise about the gain being expressed as a ratio (if division is
used to give the quantity in K-1) or about the system noise temperature being expressed
in dBK (if subtraction is used to give the quantity in dB/K).
Part (b) was generally well answered, many candidates arriving at the correct answer of
14.5 dB for the downlink SNR.
Q9
Q9 was relatively unpopular. Part (a) provoked better answers than part (b) possibly
because ADSL is a relatively new technology and, therefore perhaps, not yet covered in
sufficient depth by many taught courses. The descriptive nature of this question probably
discouraged those students who were confident of their ability to answer more
quantitative questions from attempting it, and encouraged answers from weaker students
who were struggling to find sufficient questions on the paper that they could seriously
attempt.
General Comments:
The most popular questions for the UK candidates were Q3 on FMEA (8 attempts) and
Q6 (8 attempts) on Continuous improvement followed by the attributes charts Q4 (7
attempts). The overseas candidates preferred Q6 (21 attempts) followed by Q3 and Q2
on Costs (both 18 attempts). The UK candidates preferred Q9 on ISO 9000. Q1 on
metrology was not answered by many (11 attempts, similar to last year), but other
questions (Q5 on Six Sigma, Q7 on Lognormal probability plotting and Q12 on House of
Quality) elicited a similar number of responses.
Some students performed worse on new or novel techniques or methodologies, e.g. Six
Sigma and lognormal probability plotting.
Overall, there was no marked difference in the questions answered by the UK and
overseas candidates.
Q1
Candidates knew metrological errors but could not differentiate between those which
could be eliminated and those which could not. There was evidence that the tests were
not known about overseas, though some knowledge of measurement of the major
diameter and the use of the micrometer was shown.
Q2
Some candidates mistook Juran's methodology for that of Crosby, Taguchi or Deming.
Some knowledge of PAF and models was shown; however, overseas candidates seemed
to be unaware of process cost modelling.
Q4
Warning lines were not drawn on the c chart and only the basic out-of-control conditions
were identified. Some incorrect formulae utilised, i.e. p-chart and X-bar R chart.
Q5
No understanding of the difference between TQM and Six Sigma was shown by either set
of candidates. Only three candidates produced a Cp, Cpk metric. There was limited
knowledge of what champions (executives) do in an organisation.
Q6
Common mistakes included plotting the wrong variables, not including the Pareto plot or
plotting the Lorenz curve on the incorrect scale. Some description of quality
improvement methodologies but examples of application were limited. Some discussion
of cause and effect diagrams and Quality Circles but not in the context of an
improvement framework.
Q7
No correct answers to part (a). Candidates seemed to have attempted this question on
the assumption that it was similar to a Weibull plot. Plots were generally acceptable.
Most could not answer the calculation parts of the question.
Q8
Most of the block diagrams did not show the battery configuration correctly. Three
candidates assumed a parallel configuration when it was series. Some corrective action
was described but the reasons for it were not explained.
Q9
Two excellent analyses of the House of Quality. Some discussed theory which was not
asked for.
Q1
The course reader by Wild provides materials for the study of capacity planning and
control. Strategies may be built around capacity adjustment or load adjustment and in
this case a break-even quantity of 5000 units applies to part (b)(i), with 12000 units
required to justify the investment in part (b) (ii). Since the plant capacity is a maximum of
10000 units it is not advisable to make the investment.
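The break-even figure comes from dividing the fixed cost by the contribution per unit. The cost figures below are hypothetical, chosen only so that the result matches the 5000-unit break-even quoted above:

```python
# Break-even quantity = fixed cost / (selling price - variable cost per unit).
# The cost figures are hypothetical, chosen only so that the result matches
# the 5000-unit break-even quoted above.
fixed_cost = 50_000.0      # assumed fixed cost of the capacity option
price = 20.0               # assumed selling price per unit
variable_cost = 10.0       # assumed variable cost per unit

break_even_units = fixed_cost / (price - variable_cost)
print(break_even_units)    # 5000.0
```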
Q2
Approaches to simulation include problem definition, mathematical modelling,
programming, processing and evaluation. The Monte Carlo technique makes use of
random numbers to randomly select the number of jobs arriving during, in this case, any
Q3
Manufacturing Strategy by Hill and other publications explain the classification of
manufacturing processes with respect to volume output. For example job or unit build
relates to purpose built items and lower volume, with continuous process relating to, for
example, petrochemicals. FMS derives benefits in medium volume, batch manufacturing,
extending, for example, CNC productivity by incorporating flexible programming,
handling and families of products.
Q4
Solutions for analysing network scheduling questions should be laid out carefully using
either of the methods identified to avoid mistakes. Careful progression through the
network will derive solutions, in this case an initial critical path of ADGJ, leading to a
critical path for part (c) (vi) of ADFI with duration 15 days.
Q5
Human resource strategy seeks to exploit people to serve organisational aims and
objectives, preferably through developmental and conservation processes that value
personal aspirations. Stages would include sourcing, induction, development, feedback
improving performance and reward. Other components include discipline, health &
safety, communication, participation, leadership, rights and termination.
Study of work by Maslow and Herzberg provides background for understanding the
principles of motivation, with the other issues being covered in many quality and
operations management texts. Key to job enlargement and job enrichment are concepts
of horizontal and vertical expansion respectively.
Q6
This question requires a well structured answer that should include reference to current
manufacturing improvement techniques, including supplier relationships, single minute
exchange of dies (SMED), total productive maintenance (TPM), quality tools (SPC,
circles, cause & effect analysis, Poka-yoke, Taguchi) and cultural personnel issues.
Q7
Definitions for queuing model terms and analysis are to be found in course readers. In
this case, application of standard approaches gives an average queue length of 1.33
people, an average waiting time of 1.6 minutes and a probability of 0.67 that one or more
customers are in the system. Improvements could be made through increasing service
rate by providing two servers or configuring a single channel, multi-server system
triggered by queue length.
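The quoted figures follow from the standard M/M/1 formulae. The arrival and service rates are not restated in the comments above, so the values below are assumptions chosen to reproduce the quoted results (50 customers arriving per hour, 75 served per hour):

```python
# Standard M/M/1 queue results. The arrival and service rates are
# assumptions chosen to reproduce the quoted answers.
lam = 50 / 60.0   # arrival rate, customers per minute (assumed)
mu = 75 / 60.0    # service rate, customers per minute (assumed)

rho = lam / mu                 # utilisation = P(one or more in system)
Lq = rho ** 2 / (1 - rho)      # average number waiting in the queue
Wq = Lq / lam                  # average waiting time (Little's law)

print(round(Lq, 2))   # 1.33 people
print(round(Wq, 1))   # 1.6 minutes
print(round(rho, 2))  # 0.67
```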
Q8
The works of Ishikawa and Toyota’s Ohno give the background for answering this
question effectively. A fishbone diagram would contain waste themes of over-production,
waiting, transporting, inappropriate processing, unnecessary inventory, unnecessary
motion and defects. Techniques proposed for improvement could include process
charts, Pareto analysis, control charts and Kaizen studies.
K = D(W + P)(1 + s) / Q
Where:
K = number of Kanbans
D = demand per hour
W = waiting time
P = processing time
s = policy variable
Q = container capacity
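The formula can be sketched as a short calculation; all the input values below are hypothetical, for illustration only:

```python
import math

# Kanban count from the formula above, rounded up to whole containers.
# All input values are hypothetical, for illustration only.
def kanbans(D, W, P, s, Q):
    """K = D(W + P)(1 + s) / Q."""
    return math.ceil(D * (W + P) * (1 + s) / Q)

# e.g. demand 100 units/hour, 0.25 h waiting, 0.15 h processing,
# 10% policy allowance, containers holding 10 units
print(kanbans(D=100, W=0.25, P=0.15, s=0.10, Q=10))  # 5
```

Rounding up reflects that a fractional Kanban (4.4 here) still requires a whole extra container in circulation.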
Q1
Various key points that should be included in each section are:
New-build/refurbishment and civil engineering/building.
Key participants are: client/customer, architect, designer and contractor (and
sub-contractors).
Common procurement methods are traditional (i.e. competitive tendering) and design
and build.
Work force may be divided into professionally trained engineers and managers and skilled
and semi-skilled craftsmen whose employment is often temporary.
Nature of demand is driven by the prevailing economic climate and is characterized by
“boom and bust”.
Q2
The estimating and tendering process includes: collecting the relevant cost information
for labour, plant materials and sub-contractors, studying the contract to determine the
most appropriate methods, preparing the estimate and making allowances for overheads
and preliminary items. The strategic decisions taken by the contractor include the
Unit rate estimating involves the determination of an all-in-rate for a unit of construction
(e.g. 1 m2 of single skin brick wall) and operational estimating involves the determination
of the cost of the whole operation.
Q3
Cost control is vital, as without it, corrective action cannot be taken if problems are
encountered.
There is no unique answer but reference should be made to: the way costs are
allocated, the way in which data can be collected, the variance which might typically be
expected to occur, the use of computer software, the way in which data are analyzed
and interpreted and the timeliness of feedback.
Reference should be made to difficulties encountered when carrying out the procedures
highlighted in part (b) above (for example, how can data be collected for partially
completed items).
Q4
Method statements are important because they enable a comparison of different
methods to be made and the most appropriate one selected. Any previously unforeseen
difficulties should become apparent.
Information needed includes: details of activities and their planned durations and
resource requirements, details of any temporary works required and details of the
construction site, its boundaries and proposed layout.
Q5
In addition to the definitions required a graphical representation of each definition, based
on the S curve would be useful. It is important to consider how the measures can be
combined and subsequently interpreted (for example, consider the difference between
budget and cost).
Before an analysis of the probable causes of the situation described can be carried out, it
is necessary to interpret the variances (for example, have we over-spent or
under-spent?).
The important aspects to be considered include compatibility of the data collected, often
for other purposes, with the earned value system, timeliness of data collection and
reliability/validity of the data which have been collected.
Q6
Critical path methods should be compared and contrasted with other techniques such as
bar charts and line-of-balance, highlighting the advantages and disadvantages of each
method.
Q7
It would be useful if candidates provided a definition of performance at the beginning of
the answer; it is expected that performance would embrace issues of time, cost, quality
and safety. Candidates from the UK may make reference to Key Performance Indicators,
a Government initiative, but this is not essential. The answer, however, should include:
defining what is to be measured; how the data are to be collected; how the data are to
be analyzed and interpreted; and any feedback mechanisms.
Q8
In describing the procedures a contractor may undertake in purchasing materials,
candidates, in addition to the cost of materials, are expected to address other issues
such as quality and delivery schedules and, ideally, make reference to partnering
(between the contractor and material supplier) and material performance standards.
The answer should include reference to material storage procedures and minimization of
waste.
Q9
Initially candidates should describe the type of plant for which a selection methodology
would be appropriate, before addressing the issue of plant selection. Both financial and
technical considerations may be addressed, but emphasis should be given to the
technical aspects such as output rates and plant capacity.
Factors which should be taken into account include: construction project duration,
building height, type of materials handling required, crane availability, ability to provide a
foundation, and anticipated ease of dismantling.
General Comments:
The following comments are for consideration by the Examination Board and associates,
not for candidates.
The examination produced a broad spread of marks indicating that the paper
successfully discriminated between those candidates who had and had not grasped key
elements of the broad management subject area. The marks ranged from 18 to 81, with
a mean of 50.2 and a standard deviation of 13.8.
The most popular single question was based around project management, which also
produced the highest mean (12.1) and narrowest standard deviation (2.7). Candidates
undertaking Q9, on globalisation, were on average almost as successful (mean 11.5).
A unit error in one subsection of Q6 (worth a single mark) was dealt with by awarding the
mark to all candidates undertaking it, so that they were not disadvantaged; indeed,
many had noted the error within their answers!
Due to the nature of the Management subject area, it is not possible to provide definitive
answers for those questions not directly involving numerical calculation.
However, candidates should note that there are usually key points at the core of an
answer to be addressed and a number of other points that may contribute to a lesser,
but still valuable, extent. The following information is provided with this caveat.
Overall, most candidates attempted five full questions and as such gave themselves the
opportunity to score marks out of one hundred, rather than a fraction of it. The poorer
performing candidates did not address the question set and had poor underpinning for
their answers.
Q1
(a) Within this question candidates were asked to consider explicit indicators of
organisational strategy and their purpose in communication of the strategic direction of
the organisation. As such candidates may have included Vision, Mission, Values,
Objectives or Goals in their answers. Stronger candidates would be able to explain the
purpose of each and their relationship to each other.
(b) There are a number of possible processes for the formulation of policy or strategy,
but generally they will address the following:
• An assessment of the general business environment, considering the macro
level;
• A consideration of industry-specific issues, such as lifecycle and competitive
forces;
• A comparative analysis of the competition, looking at issues such as financial,
physical and human resources, competences and skills, alliances and
partnerships and historical performance issues;
• An integration of the information to identify possible competitive moves, risks and
returns;
• A consideration of the arising options for the organisation and screening of such;
• Selection of option(s) and outlines for development towards this i.e. setting out
the mission, objectives, values etc, defining milestones and key performance
indicators to monitor progress;
• Consideration of issues such as systems, structures and change requirements
Q2
(b) Candidates here were asked to combine two key areas and recognise the importance
that leadership plays within the workplace. A number of candidates appropriately
referred to motivational theory (eg Maslow, Herzberg) in developing their answers. Better
answers focused specifically on leadership rather than management actions in general
and clearly understood the distinction between the two (and of course where the overlap
allowed inclusion). No specific leadership theories were required to be cited but should
have been implicit within the answer.
A range of points may, non-exhaustively, have included:
• Providing focus for efforts
• Clarifying roles
• Recognising achievement
• Setting an example
• Inspiring
• Setting objectives
• Developing individuals through delegation with associated support
• Developing a team of complementary individuals
• Providing support/ empathy
Q3
(a) Key points arising from this are outlined below. A number of candidates focused upon
the individual rather than the organisational benefit in answering the question, which was
inappropriate. Answers may have included:
• Feeds into an overall performance management system, linking individual efforts
with groups and ultimately organisational performance
• Provides a formal framework for issues of workplace performance addressing a
range of areas
• Helps to plan future efforts and review past issues to improve overall
performance
• Assists in motivating employees through; recognising achievement, agreeing
goals, creating win-win situations between employee and organisation
(b) There were many acceptable points within this section, both relating to the
preparation for the process (as below) and during the process also – such as trust,
empathy, communication and mutual respect. Post meeting points included issues such
as recording and agreement. Possible points included
• Inform the appraisee of the reasons for the appraisal
• Inform the appraisee of the format of the appraisal
• Set a mutually agreed date
• Allow sufficient time for the appraisee to prepare
• Ensure that all relevant information is gathered, such as previous appraisal
documentation prior to the meeting
• Select a place for the appraisal that ensures nothing disturbs the process
• Ensure that sufficient time is made available for the appraisal
• Supply prior information in documented format and allow time for clarification if
required
(c) Following the HR-related theme, the following were acceptable areas for inclusion:
work study, profit-related pay, group bonus systems, individual reward such as a bonus
or pay-rise, extra holiday, training allowance, educational allowance, employee
recognition events, promotion schemes, salary scale schemes, job enrichment, job
rotation and job design.
Q4
(a) Candidates should have recognised the wider role of the Marketing function within an
organisation, as opposed to the narrower Sales or Advertising roles which are often
subsumed within such. Answers may have included
• Conducting market research into product design requirements
• Providing information for production decisions such as volume and variety
• Promoting the products and services of the organisation
• Identifying opportunities for business development, such as market gaps
• Identifying the strengths and weaknesses of the competition
(b) Classically the Marketing Mix consists of Product, Price, Promotion and Place. Some
students, appropriately picking up on contemporary developments within the subject
field, also recognised the now widely accepted fifth P, 'People', for the Marketing Mix.
Example issues included distribution channel appropriateness (place), features
(product), premium or budget (price), communication tool selection (promotion)
(c) The smooth curve of the product lifecycle should have been broken down, as a
minimum, into its standard phases (introduction, growth, maturity and decline), though
some better candidates were also able to introduce further detail such as embryonic and
early maturity/late growth (or shake-out) phases. A number of candidates appropriately
plotted typical cashflow characteristics within the lifecycle and recognised the issues
dictating inflow and outflow at the respective phases.
Q5
(b) A number of candidates appropriately referred to the work of some of the quality
gurus, such as Deming in addressing this question. In general terms areas required
considerations such as
• Requirement for Education
• Requirement for Training
• Inclusion of all employees
• Development of appropriate organisational culture
• Development of ‘hard’ issues such as information/ control systems
• Use of performance monitoring to support and inform decision making
• Possible use of techniques/ models such as benchmarking/ business excellence
to support the process and add structure
Q6
(a) A number of candidates correctly identified the “time, cost, quality” triangle and the
traditionally accepted viewpoint of maintenance of two at the expense of the other.
(b) A number of points were accepted, provided they clearly explained the benefit, such
as:
• Identifying priorities
• Indicating ‘float’ and potential to adjust start/ finish time of activities
• Indicating ‘critical’ operations, which if delayed will have an impact upon the
overall time for the completion of the project
• It visually represents sequencing and dependency aiding interpretation
• It may be easily translated into specific plans for project resources
(c) A range of notation was accepted in answering this question, with many
candidates providing a key for clarity. The largest single problem
appeared to be in the identification and impact of the dummy activities.
[Network diagram fragment showing the critical path: activity durations D=5, G=4, J=3;
event times 5/6, 7/7, 11/11.]
There was one issue that had an impact upon the performance of candidates: the final
one-mark part of the question asked candidates to suggest how to 'crash' the project to
enable a one-week advantage. As the units were given in hours this was not possible and
all candidates who undertook the question were awarded the mark accordingly.
Q7
(a) Candidates were merely asked to define the accounting terms. Reserves appeared to
be the most problematic for candidates, who confused them with finance available at a
later point in the company's history.
(b) The figures given represent rising costs across the organisation, as would be
expected in a positive inflation climate. However, there were several items that appeared
to be rising at a disproportionately high rate. Candidates were generally successful in
identifying the main issues, but often lacked sufficient insight to suggest possible
reasons and resulting impacts. Areas are outlined below.
• Increasing value of stocks tie up cash
• Increasing debt leading to rising bank charges
• Rising materials (over inflation/ activity level) costs depress gross margin
• Wages are increasing disproportionately to turnover and depressing net margin
• Lease hire increasing disproportionately
(c) This dealt with the possible oversimplification of financial planning approaches given
potentially complex organisational operational situations. Points arising included:
• Does not take account of variation from budget in direct costs
• Real situation may not have linear costs
• Practically identifying, separating and accurately estimating fixed and direct costs
may be problematic
• Product mix changes invalidate the charts for organisational-wide application
• Sales are not necessarily achieved at the same price across the range
• Fixed costs may rise when production is nearing capacity, due to requirements
such as additional supervision.
(c) A simple calculation was required for this, which most students who undertook it
achieved successfully; it was in part related to ideas examined in the previous section.
The answers were:
3000 batch cost per unit = £11.83
5000 batch cost per unit = £ 9.50
10000 batch cost per unit = £ 7.92
Q9
(a) Candidates argued points successfully from both the facilitating and restricting angles
with respect to the rise of globalisation. Although none cited specific theorists within the
area (e.g. Yip), many made strong points about the relevant factors.
(b) Looking at the area of development strategies the following indicate key areas for
inclusion:
• Direct exporting into foreign markets
• Utilise agents for representation and distribution in foreign markets
• Set up subsidiary organisations in foreign countries (Foreign Direct Investment)
• Undertake strategic alliances with complementary organisations to export/
develop markets or facilities in foreign countries
• Franchise operations
• Through mergers and acquisitions
(c) This could be linked specifically to the previous works within the question to underpin
and included, non-exhaustively the following points:
• Locate functional operations in low cost areas
• Extend product offering geographically to extend product lifecycle and maximise
development payback
• Market more broadly and benefit from economies of scale in production
• Utilise international expertise across organisational functions (e.g. US marketing,
UK design, Indian Software)
• Controlled transfer of technical expertise into developing nations through
partnerships or licensing agreement to maximise payback from product/
knowledge
General Comments:
The style and standard of the examination was similar to that of previous years. Nine
questions were set; candidates had to answer five.
The average mark and the percentage pass rate were considerably higher than in 2002.
There were marked differences in percentage pass rates between different centres.
Many candidates were inadequately prepared for the examination. They appear to be
familiar with a limited number of topics only. Thus some questions or parts of questions
are totally inaccessible to them. On this paper, Q4(b), Q7 and Q9(a) were almost
unattempted.
Many candidates were restricted to a narrow range of techniques and did not have
sufficient flexibility to adapt when confronted with an example that did not match their
previous experience. In fact some seem to believe that they can answer their own
question when they cannot answer the question as set. For example in Q6 many used
a different factorization to that requested. In Q7, some used an analytical method rather
than the numerical method required. They should note that they would receive no credit
for this.
Many candidates were let down by their lack of skills in differentiation (Q2, and Q3(b))
and integration (Q5(a)).
Some candidates' scripts were almost illegible. Many candidates gave no reasons for
what they were doing and consequently their solutions were difficult to follow. Clearly
scripts must be legible and solutions must present clear mathematical arguments if
candidates are to receive the credit due for their work.
Q1
Mathematical methods (Vector calculus including theorems of Green, Stokes and
Gauss)
Part (a) was reasonably well done. However many candidates were not sufficiently
familiar with vectors to obtain correct answers to part (b) and part (c); in part (b) they
could not find the normal and in part (c) did not know that dS = dx dy / |n·k|.
Q2
Mathematical methods (Functions of 2 or 3 variables, maxima and minima, Lagrange
multipliers)
(a) Many candidates obtained incorrect answers due to incorrect differentiation. Some
were unclear how to identify stationary points; in particular, both first partial
derivatives have to be zero simultaneously.
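The requirement that both first partial derivatives vanish simultaneously can be checked symbolically. A small sketch using a hypothetical function, not the examination function:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2 - 2*x + 4*y  # hypothetical function for illustration

# A stationary point requires both first partial derivatives to be zero
# at the same time, so the two equations are solved as a system.
stationary = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y])
```

The nature of the point would then be classified from the second partial derivatives.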
(b) Many candidates knew the method required but made mistakes in the algebra.
Q3
Mathematical methods (Complex variable theory; analytic functions, Cauchy-Riemann
equations; conformal transformations)
Most candidates who attempted this knew the methods required. Two major problems
were not recognizing the equation of an ellipse in part (a) and mistakes in differentiation
in part (b) and part (c).
Q4
(a) A common problem here was an inability to do the required partial fractions.
(b) Again this topic was mostly avoided and appeared to be understood by very few.
There will be more questions like this, i.e. on the use of the z-transform and its
application to the solution of difference equations (see the mathematical methods part of
the syllabus for more detail).
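The link between the z-transform, partial fractions and difference equations can be illustrated on a hypothetical example; SymPy's rsolve solves the recurrence that the z-transform method would also produce:

```python
import sympy as sp

n = sp.symbols('n', integer=True)
y = sp.Function('y')

# Hypothetical difference equation: y[n+2] - 3 y[n+1] + 2 y[n] = 0,
# with y[0] = 0 and y[1] = 1. The z-transform route would expand
# Y(z) in partial fractions and invert term by term.
sol = sp.rsolve(y(n + 2) - 3*y(n + 1) + 2*y(n), y(n), {y(0): 0, y(1): 1})
```

Here the characteristic roots 1 and 2 give the closed form 2**n - 1, exactly what inverting the partial-fraction expansion of Y(z) yields.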
Q5
Mathematical methods (Solution of second order partial differential equations by
separation of variables. Fourier series with application to initial value problems)
This was well done by many candidates, although mistakes in integration were common
in part (a). In part (b) most did not attempt to sketch the graphs and very few had any
idea what they should look like. Engineering students should understand the basic
connections between physical problems and their mathematical representation.
Q6
Numerical Methods (Solution of sets of linear equations, matrix factorization methods)
Many candidates did not understand the term ‘unit diagonal’ and tried to find a different
factorization to that required. Another common problem was that many did not know how
to use the factorization to solve the equations in part (b) and used a different method
instead.
Q7
Numerical methods (Finite difference methods for partial differential equations)
This question was largely ignored. Some of those who attempted it used analytical
methods and not numerical methods as requested.
Q8
Statistics (Binomial, Poisson and Normal distributions, confidence limits)
Although popular, there were many poor attempts at this question. Common problems
were that most candidates seemed unaware of:
• The use of the Poisson distribution as an approximation to the Binomial distribution
• The method of calculating confidence intervals for proportions.
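Both points can be illustrated numerically; the counts below are assumed for illustration, not taken from the paper:

```python
import math

# Poisson(lam = n p) as an approximation to Binomial(n, p): good for
# large n and small p. Values here are assumed for illustration.
n, p = 200, 0.02
lam = n * p
k = 4
binom = math.comb(n, k) * p**k * (1 - p)**(n - k)
poisson = math.exp(-lam) * lam**k / math.factorial(k)

# Approximate 95% confidence interval for a proportion:
# p_hat +/- 1.96 sqrt(p_hat (1 - p_hat) / n)
successes, trials = 30, 120
p_hat = successes / trials
half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / trials)
interval = (p_hat - half_width, p_hat + half_width)
```

With these values the two probabilities agree to about two decimal places, which is the sense in which the approximation is used.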
Q9
Numerical methods (Matrix eigenvalue and eigenvector determination)
Statistics (Recursions and Markov chains; applications to queuing theory)
Serious attempts at this were few, suggesting that most centres are not teaching this.
Q1
Virtually everyone who attempted this question used a graphical solution. Most of these
candidates had the right idea and produced the imbalance couple (moment) diagram
(polygon) first to determine equivalent rotor imbalance at a single plane and then
proceeded to use the imbalance force diagram (polygon) to determine the equivalent
rotor imbalance at the other plane. Some candidates presented the force diagram first
and became confused about how to deal with the two unknown vectors in that.
A very common error was that the candidates interpreted the question to be asking
about the reaction forces at the bearing locations (which were inferred to be at the ends
of the rotor) instead of calculating the corrective imbalance values which should be
applied at planes A and C.
The fact that the imbalances at planes B and C were co-linear (120° and −60°) caused a
little difficulty - at least in the correction. The couple (moment) diagram (polygon) for the
equivalent imbalance at plane C all lay in a single line if couples (moments) were
evaluated about plane A. This will be a point to avoid in future years.
The fact that the average mark landed just below 8/20 suggests that this question was
fairly reasonable.
Q2
The presentation of Q2 puzzled everyone who attempted it. Underneath, it was about a
very simple torsional system comprising 2 inertias. The candidates effectively had to do
3 things: (a) “refer” some inertias through a geared interface, (b) combine two series
torsional springs into a single spring and (c) compute the frequency of the non-rigid-body
mode.
Part (b) was handled fairly well. Part (a) was not done properly by anybody and this is
possibly due to difficulties in interpreting the question.
In retrospect, some statement should have been present to the effect that the rigid
coupling had negligible inertia. Some candidates drew diagrams including this as a
separate inertia.
The use of a bevel-geared connection caused two candidates to think that somehow the
perpendicular axes theorem was appropriate.
The average mark for this question was very low (at 4.8/20) compared with the true
difficulty of the question if the interpretation was clear.
Q3
The attempts made to compute the angular velocity of link BC were generally
reasonable. The subsequent calculations for the angular acceleration of BC were very
poor.
In part (b) of this question, a surprising number of candidates ignored the
forces (torque) associated with the acceleration of the link BC and solved only a static
problem.
There appeared to be no awareness of the fact that the torque on the disc should be
assembled as the sum of two parts: one due to the acceleration of BC and the other due
to the static forces at point C.
Marks were not deducted for responses which included this gravitational force although
this was not the intention with the question.
Q4
Part (a) was simple for those candidates who recalled the formulae for ωn, ωd, ζ and Q.
An approximate value for Q was perfectly acceptable (given that the damping ratio, ζ,
was low). A small number of candidates invested effort in seeking an exact value. The
question was perhaps slightly flawed in that it did not specifically state that an
approximate value was acceptable.
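The quantities involved follow directly from the standard single-degree-of-freedom relations; a sketch with assumed parameters, not the examination values:

```python
import math

# Hypothetical single-degree-of-freedom parameters (not the examination values).
m, c, k = 50.0, 40.0, 4.5e4   # mass [kg], damping [N s/m], stiffness [N/m]

wn = math.sqrt(k / m)                # undamped natural frequency, rad/s
zeta = c / (2.0 * math.sqrt(k * m))  # damping ratio
wd = wn * math.sqrt(1.0 - zeta**2)   # damped natural frequency, rad/s
Q_approx = 1.0 / (2.0 * zeta)        # acceptable approximation for low damping
Q_exact = 1.0 / (2.0 * zeta * math.sqrt(1.0 - zeta**2))
```

For a low damping ratio the approximate and exact values of Q differ negligibly, which is why the approximation was acceptable.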
All parts of this question required the capability to determine the magnitude of a complex
number and future candidates are well advised to ensure that they have this. A number
of candidates did not have this capability.
Parts (b) and part (c) required the appreciation that the force transmitted to the ship deck
is the sum of two contributions: stiffness x deflection + damping x velocity. Again, some
candidates were unable to recognize this and this is a basic failing which cannot be
condoned.
Part (c) required the candidates to choose a “tuned-frequency” for the absorber. This
part of the question relied on some intuition to realize that a good choice for this “tuned-
frequency” would be 30.5 rad/s.
Two candidates erred by misreading the frequencies given as Hz rather than rad/s.
Q5
All papers in the past 4 years have contained a question based solely (or at least mainly)
on the gyroscopic couple associated with the rotation of the axis of spin of an axi-
symmetric rotor.
Evidently, most candidates find the concepts of gyroscopic couples prohibitively hard
since only 4 people attempted this question.
The marks for this question were very bi-modal (one person got 0 and three others
scored above 14/20). The average mark for the question was 12.75 though clearly the
sample is rather too small to be statistically significant.
Q6
Most candidates who attempted this question began by solving (or attempting to solve)
the eigenvalue problem eig(K,M). This was not the intention of the question and in
retrospect, it would have been advisable to have provided more explicit guidance to the
candidates that solution for the two eigenvalues was not required.
A point for future candidates to note is that if v is an eigenvector of the system described
by stiffness matrix, K, and mass matrix, M, then Kv = λMv, where λ is the corresponding
eigenvalue.
Part (b) was significantly easier to address if the candidates recognized that the equation
for the couple could be used to express Q as a function of x and then substitute this into
the force equation.
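The relation Kv = λMv can be verified numerically without solving for the eigenvalues by hand; scipy handles the generalized problem directly (the matrices below are illustrative only):

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical 2-DOF stiffness and mass matrices for illustration.
K = np.array([[ 6.0, -2.0],
              [-2.0,  4.0]])
M = np.array([[2.0, 0.0],
              [0.0, 1.0]])

# eigh solves the generalized symmetric problem K v = lambda M v directly.
lam, V = eigh(K, M)

# Each column of V satisfies K v = lambda M v, so this residual is zero:
residual = K @ V - M @ V @ np.diag(lam)
```
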
The responses to this question were much worse than expected. The picture in this case
possibly did not help. The candidates attempting this question failed completely to
understand that the “static” problem means that the forcing frequency is zero (i.e. the
mass matrix does not contribute to the dynamic stiffness). I believe that if/when those
candidates having attempted
the question see the solution, they will be very surprised at how simple the question was.
Q7
This question asked the candidates to use “Rayleigh's Method”. Implicitly, they almost all
took this to mean that integrals for T and V should be generated:
V = (1/2) ∫ EI (d²u/dx²)² dx and T = (1/2) ∫ m (u̇(x))² dx,
the integrals being taken over the length of the beam.
Marks were given for candidates who supplied these formulae (using whatever symbols)
but the intention of the question was not to use these directly.
In fact, the strain energy was intended to be found from knowledge of the gravity force
and the deflection at the overhung disc and the kinetic energy was intended to be
computed (in the first instance) based only on the translation and rotation of the
overhung disc.
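That intended route equates the strain energy stored under the gravity load to the kinetic energy of the disc's translation and rotation. A sketch with assumed values, not the examination data:

```python
import math

# Hypothetical values for an overhung disc (not the examination data):
m = 12.0        # disc mass, kg
I = 0.05        # disc moment of inertia about the bending axis, kg m^2
delta = 2.0e-3  # static deflection under the disc's weight, m
phi = 4.0e-3    # static slope (rotation) at the disc, rad
g = 9.81

# Rayleigh's method: equate maximum strain energy to maximum kinetic energy.
#   V_max = (1/2) (m g) delta                 (work done by the gravity force)
#   T_max = (1/2) omega^2 (m delta^2 + I phi^2)
omega = math.sqrt(m * g * delta / (m * delta**2 + I * phi**2))
```
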
Q8
Q8 had 3 parts. Part (a) was a “power-balance” question. Of the few who attempted it,
most got this right.
Part (b) of Q8 evidently had excessive description before it. Two candidates drew
diagrams to represent this description and in retrospect, it would have been better to
have provided the diagram in place of the lengthy wording. Part (b) required an
assumption of some sort. You could not produce an exact answer given the information.
The assumption related to the acceleration of the flywheel between θ = 140° and
θ = 170°. With this assumption, it was clear that (f140 + f170)/2 = 5 revolutions per second.
The remaining information was available from considering an equation for the net work
done on the flywheel either during the remaining 330° of angle or in the punching 30°. A
few candidates were able to produce the latter equation but no-one produced the former.
Part (c) involved “referring” a mass to a rotating inertia (the flywheel) or the other way
around. Nobody who attempted the question could make any sense of this. This is
consistent with the very poor performance in Q1 on referring inertias across a gear
connection.
Q9
Clearly, this question was extremely difficult for the candidates to understand —
although the actual workings were very simple. It was intended as an opportunity for
those candidates with a sound intuitive grasp of mechanics but limited experience of the
“mechanistic” questions to grab marks. Nobody made any sense of it. Two candidates
tried to interpret it as a “torsional vibration” problem involving 3 inertias in a “chain-type”
arrangement. Some compensation marks were given for this but only very few.
General Comments:
This paper contained 11 questions with candidates selecting 5 from 11. All questions
had been attempted by more than one candidate. Comments follow about each question
with a summary at the end.
Q1
The first part of this was a description of two processes that are well documented
although uncommon. The second part was more challenging. In this question the key
was to see that equal amounts of work hardening mean equal amounts of strain, not
just equal changes in bar diameter. True strain should have been used and a forward
Q2
This again started off with a description of probably more common processes than the
first question; however, it did ask for the differences between the two that would affect
their areas of application. Transfer moulding can in general work with thicker sections as
the polymer is heated to curing temperature by being forced through small holes, which
heat it through faster than the compression-moulding die.
The numeric part is about balancing the forces from injection with those trying to open
the die (i.e. the internal pressure of the material). This leads to a maximum projected area
of 19.635 mm².
Q3
This question focussed on composites. The first two parts were looking for short answers
on how composites function and some examples. The numeric part centres around the
rule of mixtures which leads to a glass proportion by volume of 26% in the composite.
The proportion by weight is then a density calculation leading to 62.4% and an overall
density of 1.5.
The imposed load on the fibres comes from the fact that there is equal strain on each
and with the fibres having a higher modulus they carry 10.24 times the load the matrix
carries.
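The rule-of-mixtures and load-sharing calculations can be sketched as follows; the fibre and matrix moduli are assumed typical values, not the examination data:

```python
# Rule-of-mixtures sketch with assumed moduli (E-glass fibre and a typical
# polymer matrix); these are illustrative values, not the examination data.
Ef, Em = 72.0, 2.5      # fibre and matrix moduli, GPa
Vf = 0.26               # fibre volume fraction from the question
Vm = 1.0 - Vf

Ec = Vf * Ef + Vm * Em  # composite modulus (equal-strain rule of mixtures)

# Under equal strain the loads divide in proportion to (modulus x area):
load_ratio = (Ef * Vf) / (Em * Vm)  # fibre load / matrix load
```

With these assumed moduli the fibres carry roughly ten times the matrix load, the same order as the ratio quoted in the report.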
Q4
The first part of this was looking for the metallurgical condition of the material. This
should have been a solution-treated and artificially aged condition. Welding parts
together would overage the material and they would need to be re-heat-treated. Some
high-energy welding methods may be helpful. Adhesive bonding would at least require
component redesign, and temperature effects may still be a problem.
Q5
The key to this question was the formation, on the surface of a material machined using
EDM, of a white layer and a heat-affected zone. This has little effect on the static
mechanical properties (those of a tensile test) but has a large effect on fatigue.
Q6
The first part looked at common features seen in many casting dies/moulds. The first
part of the numerical calculation was a matter of substitution in the given equation while
the latter part was more challenging. In the latter part the candidate has to recognise that
while some gas will be evolved into the aluminium there will also be some still dissolved.
Then once the volume of gas is known, the amount (volume) of porosity can be
calculated; this is 0.31%.
Q7
The first part looked for reasons for heat treatment (not heating) during manufacture.
The second part looked at heat treatment specification and asked candidates to criticise
a given specification. Criticisms here would range from too low an austenitising
temperature, too short a time at temperature and too severe a quench, to a very low
tempering temperature and an unexplained second temper.
Q8
The first part was descriptive of three processes at the forefront of coating technologies.
The calculation relied on knowledge of the relation between charge, current and valency.
A well-documented equation exists but it should not be needed as it can easily be
developed from basic knowledge. Effectively each iron ion needs 3 electrons and this,
along with the current flow and the Faraday constant, gives a total time of 5.45 hours.
Q9
The two different methods of tolerancing presented here are very similar. With 100%
interchangeability required the tolerance range is effectively split between the shaft and
its bearing. Giving the shaft 37.975 to 39.92, and the bearing 38.025 to 38.08. It should
be noted that the smallest bearing and largest shaft combination along with the vice
versa do not conflict with requirements.
The statistical tolerancing, instead of using an absolute tolerance band, uses a three-
sigma relationship, which leads to the standard deviations having to be combined. It
should be noted in this case that the square of the combined value is the sum of the
squares of the two individual values. This will give the shaft 37.965 +/- 0.04802 and the
bearing 38.055 +/- 0.04802.
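Because independent variances add, the combined three-sigma tolerance on the fit is the root-sum-square of the individual tolerances:

```python
import math

# Statistical (three-sigma) tolerancing: variances add, so the combined
# tolerance is the root-sum-square of the individual tolerances.
t_shaft = 0.04802    # shaft tolerance quoted in the report
t_bearing = 0.04802  # bearing tolerance quoted in the report

t_fit = math.sqrt(t_shaft**2 + t_bearing**2)  # tolerance on the fit
```
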
Q10
This question on economics required the time to machine one component to be worked
out in each case, together with the time for regrinding/tool changing. The cost for the
high speed steel should have been £20.67 and for the ceramic £3.79.
Q11
This question on formability required some descriptions of common processes for the
first part. The second numerical part required the calculation of strains for each position.
With this complete they needed to be plotted onto the forming limit diagram; values
above the curve indicate failure. Failure should occur at possibly positions 3 and 10.
Generally
At least one candidate produced a good answer for each question. Descriptive parts
would always be improved by appropriate use of diagrams and numeric parts with better
descriptions of the individual stages of calculation. The latter helps when mistakes are
made in the maths.
General comments:
The results for 2003 indicate that typically students do not provide broad coverage of the
syllabus and that certain topics are completely omitted. Specifically, the following
comments are offered in relation to the 2003 examination.
Q1
The question deals with one of the most widely applied digital position measurement
techniques based around the incremental encoder. This did not attract a single attempt
and there is a clear need for effort in this topic and the associated basic digital logic
methods. The encoder also forms the basis of velocity measurement and again such a
widely used method requires attention.
Q2
A straight-forward problem requiring the interpretation of a linear system model. This
received a large number of attempts with limited success. It is clear that most students
do not know how to apply the simple equations given for characterising the transient
response.
Answers:
(a) 0.77 / (s² + 0.5s + 0.77)
(b) Ts = 15.75
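Reading the quoted answer as G(s) = 0.77 / (s² + 0.5s + 0.77), the transient-response characteristics follow from the standard second-order relations; the sketch below uses the common 4/(ζωn) settling-time approximation:

```python
import math

# Reading the quoted answer as G(s) = 0.77 / (s^2 + 0.5 s + 0.77),
# the standard second-order characteristics follow directly.
wn = math.sqrt(0.77)     # undamped natural frequency (wn^2 = 0.77)
zeta = 0.5 / (2.0 * wn)  # damping ratio (2 zeta wn = 0.5)
Ts = 4.0 / (zeta * wn)   # 2% settling-time approximation, 4 / (zeta wn)
```

This approximation gives Ts of about 16, in line with the quoted Ts = 15.75; the small difference depends on the settling criterion used.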
Q3
A basic sampled data question that received limited attention. There is a need to cover
the basics of the technique such as establishing ‘Z’ transform system model and
mathematical manipulations.
Q4
The state space question was provided as an essentially mathematical problem. This
received attention from the majority of candidates. Within the restrictions of an
examination the scope of the question is indicative of coverage of the topic required by
candidates.
Answer:
K = 282
K2 = 13000
Q5 and Q8
Both questions deal with the application of PID control. While the mathematical
manipulation was successfully handled there is a clear need for effort in establishing a
block diagram, using final value theorem and carrying out comparisons with the ITAE
polynomial. As in many aspects of problems candidates have often given little thought to
how a block diagram is obtained from a system schematic. There is a clear need to
understand the process of tuning a PID controller.
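The final value theorem step can be illustrated on a hypothetical unity-feedback loop; the plant below is assumed for illustration only:

```python
import sympy as sp

s = sp.symbols('s')

# Hypothetical unity-feedback loop for illustration: G(s) = K / (s (tau s + 1)).
K, tau = 10, sp.Rational(1, 2)
G = K / (s * (tau * s + 1))

# Final value theorem: the steady-state error to a unit step R(s) = 1/s is
#   e_ss = lim_{s -> 0} s * E(s),  with E(s) = R(s) / (1 + G(s)).
E = (1 / s) / (1 + G)
ess = sp.limit(s * E, s, 0)
```

Because the assumed plant contains an integrator, the steady-state step error evaluates to zero; repeating the calculation without the integrator leaves a finite error, which is the comparison candidates were expected to make.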
Q6
A classical linear servo system problem requiring the candidates to manipulate an open
loop function on the Nichols diagram to achieve a specific peak closed loop magnitude
value. Candidates have a limited appreciation that omits understanding of how the open
loop and closed loop are linked on the Nichols diagram. This is clearly a topic that
requires wider coverage.
Q9
Only a limited number of attempts for a problem that deals with compensation of a linear
system. This is a widely used basic technique calling for the determination of the gain for
a specific error (K = 100) followed by the sizing of a lead-lag compensator. Effort is
needed in understanding what the various compensation functions mean in terms of
physical effect. Reading around the topic is needed.
Q10
A root locus problem requiring gain calculation for specific damping ratio (K = 4.2). The
technique of interpreting the dominant pole pair is clearly not understood with very few
candidates attempting the question. Again there is a clear need to read around the topic
to gain an understanding of the physical interpretation of applications using the root
locus method.
Q11
A basic linear servo problem with few attempts being made. The problem essentially
required little more than obtaining a transfer function, calculating the steady state error
(100/288) and applying integral action.
Q12
A basic strain gauge instrumentation problem for which there were few attempts and no
successes. The problem and solutions are of a standard type covered in most
instrumentation text books. Candidates need to cover basic techniques of measurement,
e.g. temperature, force, pressure, fluid flow, displacement, speed.
General Comments:
Due to the low number of candidates, comments on marking are not supplied; only the
answer pointers are given.
Q1
Part (a) (i) and (ii) The most appropriate development life cycle would be 'incremental
development or prototyping'. Here the implementation would involve phasing in new
functionality and testing each phase before progressing to the next stage. The main
justification in using this type of development life cycle is that new functionality could be
tested in parallel alongside the existing system to minimise disruption. It would be useful
to discuss how the design of the database is influenced by this approach - given the
number and size of tables - for example only transaction tables would be re-designed
to reflect the new functionality.
Part (b) Candidates should notice that the transaction sequence could easily check a
simple business rule that looks up the loan limit for a particular subscriber with a
particular subscriber type. Failure would mean a simple alert to the librarian (an
exception message) with the possibility of continuing the loan process if the user decides
to return an existing loan for instance.
Each transaction is modelled on a STD (state transition diagram) with pre-conditions and
post-conditions. Pre and post conditions influence subsequent state changes that may
occur. Executing a transaction results in one of two outcomes: it either succeeds or fails.
If it succeeds, further dependent transactions are handled in sequence. A transaction
fails, in the context of the question, because of a violation of an integrity constraint or
business rule (i.e. loan limit exceeded), raising an exception in the form of a message as
a result. The new state following an exception must show how the system recovers and
alerts the user. Therefore any series of transactions can be modelled mainly to test a
valid sequence and how integrity constraints would be handled against a live database.
Q2
Part (a) As far as representational power is concerned, an Object or Entity more naturally
models real-world concepts and processes.
Part (b) UML supports OO modelling in which Entity Types equate to Classes. This is a
level of abstraction that conveniently ignores the detailed logical representation as
relations (from Entity Types) and program code (from Class definitions).
Q3
Part (a) (i) (ii) and (iii) The existing table holds a single record for each CD irrespective of
the number of copies that are held on the catalogue. The number of copies cannot be
used as an active field to depict that more than one copy is available for loan because it
is used for another purpose (in effect a stock check). The solution is to create a new
record for each copy with a new composite key (one value holding a copy number the
other the original catalogue number).
A further table must reflect the loan status and therefore record the
borrower/subscriber and the copy number/catalogue number of the CD. Another table
would need to discriminate access rights and thus the status of users. This table is used
to quickly look up users as it will hold only valid subscribers.
The three tables are interrelated through primary key and foreign key relationships.
BCNF states 'Every Determinant is a candidate identifier'. Candidates are expected to
draw a functional dependency diagram and apply the BCNF rule.
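The three-table scheme with a composite key can be sketched with sqlite; all table and column names are illustrative, not taken from the paper:

```python
import sqlite3

# Table and column names are illustrative, not taken from the examination paper.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Copy (
    CatalogueNo INTEGER,
    CopyNo      INTEGER,
    PRIMARY KEY (CatalogueNo, CopyNo)   -- composite key: one row per copy
);
CREATE TABLE Subscriber (
    SubscriberID   INTEGER PRIMARY KEY,
    SubscriberType TEXT NOT NULL        -- discriminates access rights
);
CREATE TABLE Loan (
    CatalogueNo  INTEGER,
    CopyNo       INTEGER,
    SubscriberID INTEGER REFERENCES Subscriber(SubscriberID),
    PRIMARY KEY (CatalogueNo, CopyNo),  -- a copy is on at most one loan
    FOREIGN KEY (CatalogueNo, CopyNo) REFERENCES Copy(CatalogueNo, CopyNo)
);
""")
cur.execute("INSERT INTO Copy VALUES (1240, 1)")
cur.execute("INSERT INTO Copy VALUES (1240, 2)")  # second copy of the same CD
```

The composite key lets several copies share one catalogue number, and the primary/foreign key pairs interrelate the three tables as described above.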
Part (b) In a way part (a) helps to answer this part. The transaction table is present in the
current system so that audit trails can be made, mainly for accounting purposes. With
the use of a DBMS server a transaction log would record transactions and the log can be
recovered and maintained independent of the physical tables. Thus having a large
transaction table is both redundant and difficult to maintain. In fact the volatility of this
table will cause performance problems as the table will soon become very large. One
solution is to provide separate transaction tables for active loans and fine payment,
reservations, borrower subscriptions and renewals. In addition a further set of tables
would hold historical data for transactions in previous years.
Q4
Part (a) Raw data is regarded as the base from which information can be gathered,
summarised and reported. In many organisations the data is hidden and often dispersed.
Thus one aspect of being data rich is the realisation that unless this data can be collated
and organised, an organisation will be information poor. Information should have
context and timeliness. Also, an information-rich organisation would use 'data mining'
tools to assist in decision support and to provide accurate cost controls on business
processes, for example.
Part (b) The first part should discuss access to a centralised database or a publication
database that collates distributed data from remote sources. A DBMS if properly
designed will support most of the issues given above. A brief discussion of additional 3rd
party tools should expand on the following:
• Data Mining tools
• OLAP (on-line analytical processing)
Examples should indicate the business process requirements mentioned in the Case
Study, for example:
A Crawford super user would want to check usage patterns (say over a particular period)
to plan librarian duty.
A cost accounting system may extract data to determine subscription levels again
according to risk and usage patterns. The new web site may be able to better tune
advertising to certain user groups.
Q5
Part (a) n-tier client server normally refers to ‘n’ representing further functional levels
than the more familiar client-server architecture where n=2. Thus for n-tier, n =3
typically. A diagram would indicate the function and interaction between each tier, thus:
client tier - called a thin client where the function is provided by a web browser interface
which connects to an application or middle tier running on a remote server. There will be
many clients accessing the middle tier. The connection protocol is HTTP. The final tier is
the database or data tier, a central repository of an organisation's data and business
rules. The database receives requests for data from the middle tier.
The standard communication language between and within tiers is XML.
The most generic application programming technique is CGI and the main point that
must be addressed is how a CGI program runs entirely on an application server
communicating with a database (through SQL calls) and returns data back to the client
as rendered HTML.
Therefore a diagram should be used to illustrate the steps involved in processing a CGI
program. This should show the sequence of events, starting from when a URI is issued
from a web browser to the reply being received from the application server.
Q6
Part (a) SQL code: on the data given this will return CD 1240 as an item on loan by
SubscriberID = 77 and reserved by SubscriberID = 21.
Part (b) Although there is only one SQL query, the RA (relational algebra) statements
can be processed in many different ways. However the allocation of marks depends on
how efficient and how functional are the RA statements that are used to express the
query.
Part (c) Indexes have two main forms - clustered and non-clustered.
Both clustered and non-clustered indexes have two types of nodes: leaf nodes and root
nodes. The leaf level of the clustered index is the data itself. The leaf level of a non-
clustered index is a pointer to the root level of the clustered index.
The clustered index key - that is, the column(s) on which the index is built - determines
how data is physically ordered. So if you build a clustered index on the state column of
the authors table in the Pubs database, the data will be ordered based on the values of
state - in either ascending or descending order.
A clustered index typically represents the index for the primary key, and only one can
exist per table. Thus a non-clustered index is more widely used, in particular where:
The index has good selectivity (above 95%).
There are small ranges of data (not large ranges). Clustered indexes perform better for
large range queries.
Both the WHERE clause and the ORDER BY clause are specified for the same column
in a query. This way, the non-clustered index helps to speed up accessing the records,
and it also speeds up the sorting of the records (because the returned data is already
sorted).
DBMS vendors such as Oracle and SQLServer provide SQL commands to create and drop
indexes. Further mechanisms such as system stored procedures manage the
maintenance and tuning of indexes; for example, the DBCC tool in SQLServer.
Candidates should demonstrate a feel for the broad range of tools that go beyond
standard SQL support.
Q7
Part (a) One of the stated advantages of the proposed new web-based implementation
is to improve accessibility. The improvements in accessibility would include the
capability of users to reserve CDs and browse the contents of the CD catalogue. These
user functions represent a service and thus users may be encouraged to reserve items
on-line rather than tie up librarians. Users will need to be aware of the multi-access
security and performance issues.
Part (b) Designing a web site is much more restrictive than designing, for example, a
Windows-based interface. There are certain details that candidates must take into
account, such as browser differences, content management and the stateless nature of
HTTP. Candidates should have covered design principles such as the following:
Consistency. Standardise the look and feel and establish a consistent navigation scheme.
Efficiency. Make the interaction as short and as productive as possible. E.g. loan
processing should only show CDs that are available for loan and meet pre-selection criteria.
Support multiple users and multiple purposes. Particularly important in the case
study as different users have different views and access rights. One truism concerning
user interface design is that there is no single user. Therefore, any design must account
for the diversity of users that may change over time.
Usability and familiarity. For example, librarians will need the familiar shortcuts
available on the old system to get directly to the database.
However only introduce a new way of working when it is clearly better. The familiar way
is not always the most efficient or effective. If a new metaphor is clearly better, introduce
it and encourage users to learn it.
Q8
The report should be well structured and well argued. The main points are:
• background to the problem. State who the target reader/consumer will be. (This
was not stated in the question).
• objectives of the approach being suggested (i.e. smooth transition of operation,
migration of data, minimum disruption, maintaining data integrity and raising new
issues (e.g. security)).
• staging of the conversion process, covering an outline and then some technical detail.
The report should avoid too much technical jargon and vendor bias.
Clearly an external internet provision would require firewalls and login procedures.
• discuss the importance of providing a 24/7 operation with much more automated
processing (e.g. reservations). The risks here are of missing transactions and
failure of connections. Mention of whether users should be able to pay
subscription fees on-line and if so how to ensure secure transactions (e.g. SSL).
• discuss user interface design issues. For example librarians need an interface as
functional as the current one which will not be easy to realise (i.e. Windows GUI -
> Web browser GUI).
Other risks concern the problems of internet access to a potentially insecure database.
These security risks may be accidental (perhaps from existing members) or malicious,
through viruses and Trojans. Support for encryption of passwords and other sensitive
data should be considered. Other areas that should be covered include proper
maintenance of data integrity and access control through the use of automated web-
service approaches. Candidates should also mention the option of outsourcing the
Crawford application to a third party, who would then be responsible for handling the
issues mentioned above.
General Comments:
2003 is the eighth year this examination has been presented. From 1999 onwards
candidates have had to answer four questions from a choice of nine. Eight candidates
attempted the examination this year (one candidate at a UK centre and the remaining
seven at centres overseas). This year the pass rate is 12.5%. The marking of all scripts
proceeded as anticipated, with no deviations from the marking scheme. Every question
except Q5 (on correctness proofs) was answered by at least one student. The most
popular questions were Q2 (on lifecycle models) and Q9 (on risk management and
activity networks). As in previous years it is evident from the marks profile that the
candidates are not able to answer the questions at the depth required for this Graduate
Diploma examination. The questions on formal specification, logic, correctness proofs
and, rather surprisingly, the SEI Capability Maturity Model, were particularly unpopular.
This year shows the same number of candidates for the examination as for 2002. The
number of candidates declined between 1996 and 1998, picked up again in 1999 and
appears now to have levelled out. The relevant statistics are: 1996 (18), 1997 (13), 1998
(5), 1999 (13), 2000 (5), 2001 (5), 2002 (8), 2003 (8).
Q1
The software process encompasses the activities, techniques and tools used to produce
software. It includes therefore both the technical and managerial aspects of software
production. The Software Engineering Institute Capability Maturity Model (CMM) should
be able to improve the management of that process by inducing change in an
incremental fashion. There are five levels of maturity inherent in the CMM. These are the
initial, repeatable, defined, managed and optimizing levels. An organization may
advance in an evolutionary fashion towards the higher levels of process maturity. Full
details of the characteristics associated with each level may be found in standard
software engineering texts.
The use of the CMM may be inappropriate for the following reasons: the exclusive
emphasis on project management rather than on product development, the exclusion of
any risk analysis and resolution and the lack of definition of the domain of applicability of
the model.
ISO 9000 is a series of five related standards that are applicable to a wide range of
activities, including design, development, production, installation and servicing. ISO
9001 for quality systems is the standard most relevant to software development. ISO
9000-3 provides specific guidelines to assist in applying ISO 9001 to software systems.
In ISO 9000 there is an emphasis on documentation to ensure consistency and
comprehension, on measurement to ensure process improvement and on the
continuous training of software engineers. All of these emphases should improve
software quality.
Standards within the overall software quality assurance process include product
standards (including document standards, documentation standards and coding
standards) and process standards (including definitions of specification, design and
validation processes and a description of the documents which must be generated in the
course of these processes). The former standards apply to the product being developed
and the latter standards define the process that should be followed during software
development. These activities define a framework for achieving software quality.
The use of CASE tools can enhance the role played by standards in improving software
quality. Such tools could include: editing tools (text editors), method support tools
(design editors) and documentation tools (page layout programs).
Q2
The initial part of this question concerned the spiral model attributed to Boehm (1988).
The model takes the form of a spiral where each loop in the spiral represents a phase of
the software process. The model is an evolutionary one that couples the iterative nature
of prototyping with aspects of the linear sequential model. Each loop may be divided into
four sectors – objective setting, risk assessment and reduction, development and
validation, planning. Prototyping is used as a risk reduction mechanism. There is explicit
consideration of risk within the model: the model attempts to resolve risks by initiating
actions to discover information that reduces uncertainty. However the model requires
considerable risk assessment expertise and relies on this expertise for success. If a
major risk is not uncovered and managed, problems will follow. It is also necessary for
the project to be divided into appropriate phases by management. If this management
and risk assessment expertise is not present the use of the spiral model is inadvisable.
Similarly the use is inadvisable if the cost of the risk analysis is comparable to the overall
project cost. It is also inapplicable to the development of software for an external client,
because if risk analysis suggests the project should be terminated this could lead to a
breach of contract. Further details of the model may be found in standard software
engineering texts and in Boehm’s original paper. Although this was a popular question
the answers provided were lacking in detail, which was reflected in the marks awarded.
Q3
Two different approaches to formal specification could be those of algebraic specification
and model-based specification. A model-based language enables the building of a
model of the system using mathematical constructs such as sets and sequences and
system operations are defined by how they modify the state of the system. Examples of
model-based languages include Z, VDM, B and CSP. An algebraic-based language
permits the specification of a system in terms of operations and their relationships.
Examples of algebraic-based languages include Larch, OBJ and Lotos. The use of
formal specification languages is particularly appropriate in the design of safety-critical
systems. Algebraic techniques are well suited to specifying interfaces where object
operations are independent of the object state.
An argument against the use of formal specifications could be that formal specifications
can be difficult and tedious to write, thereby allowing the introduction of errors. In favour
it could be argued that the use of formal specifications aids precision and removes
ambiguities.
The second part of the question required the construction of specifications for the
operations described in the given scenario. This was a straightforward process; all of
the operations involved standard manipulations that should have been familiar to both
candidates. Data structures, relationships and invariants should have been defined and
hence the required specification generated. Candidates were asked to use a formal
specification language of their choice; it was expected, but not required, that they would
use either VDM or Z. Neither candidate was able to write anything approaching a
correct answer for this part of the question.
A static analyser analyses the source text to discover anomalies such as uninitialised
variables, unreachable code, unused variables and so on. It may include a number of
stages such as control flow analysis, data use analysis, interface analysis, information
flow analysis and path analysis. A dynamic analyser provides information on how often
each statement in a program has been executed. It may also provide information on
branches, loops and processor usage. The two principal uses of dynamic analysers are
test coverage assessment and program optimisation. Thus static and dynamic
analysers can be considered as complementary tools used within the software testing
process.
The distinctive elements of walkthroughs and inspections that candidates were expected
to identify are summarised below. An inspection may be considered as document
reading with the intention of finding defects. It is a group-based activity with a well-
defined structure using a small team consisting of four members. The four members
could be a moderator, a designer, an implementer and a tester. The key stages that
should have been identified are: planning (team selection, logistics of meeting), overview
(presenting a general description of the material to be presented to the review team),
individual preparation (each team member studies the code and its specification),
program inspection (concerned with identifying faults but not suggesting how these faults
should be corrected or recommending changes to other components), rework (author
resolves all faults and problems), follow-up/re-inspection (to ensure resolution of all
issues and to check for new defects that may have been introduced during the rework).
A walkthrough is a less formal process than an inspection, with fewer stages – namely
preparation followed by team analysis of the documentation. It is also a group-based
activity with four to six members, including a representative from the specification team,
the manager responsible for the specification, a representative of the team who will
perform the next phase of the development and a representative of the software quality
assurance group. Members of the team should be experienced technical staff who will
develop a list of items that are not understood and a second list of items believed to be
incorrect.
Any form being used to summarise the various stages within an inspection process
should include, but not be limited to, the following: project name, module name and
author, moderator name, team members, meeting date and location, meeting type, major
defects, minor defects, distribution list. The text 'Managing the Software Process' by W.
S. Humphrey gives further details of the inspection process.
Q5
The precondition P of a program fragment S is the predicate that needs to be true before
the execution of S. The postcondition Q is the predicate that is required to be true after
successful execution of S. The weakest precondition is the minimum predicate that if
true before the execution of S will ensure that the postcondition Q is true.
The next part of the question required candidates to demonstrate knowledge either of
the use of Dijkstra’s predicate transformers or of the use of Hoare’s inference rules in the
establishment of the correctness of software. This material may be found in standard
texts.
Proof by induction normally involves a predicate of the form P(n) where n is a natural
number. Firstly P(0) is shown to be true – this is the basis step; subsequently it is shown
that if P(n) is true then so is P(n+1) – this is the inductive step. Finally by the principle of
induction it may be concluded that P(n) is true for all n. Induction is thus used to
determine relationships between predicates, hence assisting in the determination of
software correctness.
The body S of a while B do S construct may be executed zero times, once, twice and so
on, depending on the value of B after each execution of S. If Q is the postcondition then
the weakest precondition (wp) of the while B do S construct may be written in terms of
H0 = Q ^ ~B and Hi = B ^ wp(S, Hi-1). A loop invariant INV is a predicate about a loop
that is always true whenever control reaches the predicate; such an invariant
summarises the loop behaviour. INV is true after any number of iterations of S. It may
be shown that Q is true if and when the loop terminates, assuming INV is initially true.
Thus the loop invariant forms an approximation to the weakest precondition of a while B
do S construct. To show termination requires an expression in terms of the program
variables that firstly takes on values from a well-founded set and secondly decreases on
each loop iteration.
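The roles of the invariant, the variant and the postcondition can be illustrated with a small sketch; the function and its invariant are illustrative and are not taken from the examination:

```python
def sum_to(n: int) -> int:
    """Sum 0..n, with the loop invariant and variant checked at run time."""
    assert n >= 0                            # precondition P
    i, total = 0, 0
    while i < n:                             # guard B
        assert total == i * (i + 1) // 2     # invariant INV: true on every entry
        variant = n - i                      # drawn from a well-founded set
        i += 1
        total += i
        assert n - i < variant               # variant strictly decreases -> termination
    # On exit, INV holds and B is false; together these imply Q.
    assert total == n * (n + 1) // 2         # postcondition Q
    return total

print(sum_to(5))  # 15
```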
Q6
For the first part of this question two out of the three candidates omitted all discussion of
structural and functional testing. Candidates should have been aware that with functional
(black-box) testing the tests are derived from the system’s specification and that with
structural (white-box) testing the tests are derived from a knowledge of the structure of
the software. In functional testing inputs and outputs only are considered – examples of
functional techniques are equivalence partitioning and boundary value analysis. In
structural testing path coverage may be determined and basis path analysis performed.
Both functional and structural testing methods are necessary as they complement each
other, testing different parts of the system development (i.e. at the module, sub-system
and system levels). No one testing technique covers the range of tests required.
In the second part of this question the cyclomatic complexity of the given procedure
should have been established as 5 (note that the Boolean controlling the while loop is a
conjunction of two terms). None of the three candidates constructed a flowgraph to aid
their determination of the cyclomatic complexity of the given algorithm. Given a
cyclomatic complexity of 5, five test cases are required to exercise all independent paths
within the graph. In addition functional tests could be constructed although this was not
expected of the candidates.
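The counting rule can be demonstrated on a hypothetical procedure of the same shape as the examiner's (a while loop controlled by a conjunction of two terms, plus two further decisions), using Python's ast module; the procedure itself is illustrative, not the one set in the question:

```python
import ast

# A hypothetical procedure whose while loop is controlled by a conjunction
# of two terms, plus two if statements -> four decision points in total.
SRC = """
def search(items, target, limit):
    i = 0
    found = False
    while i < limit and not found:      # two predicates -> two decisions
        if items[i] == target:          # one decision
            found = True
        i += 1
    if not found:                       # one decision
        i = -1
    return i
"""

def cyclomatic_complexity(source: str) -> int:
    """V(G) = (number of simple predicates) + 1."""
    decisions = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.While, ast.For)):
            decisions += 1
        if isinstance(node, ast.BoolOp):      # each extra operand adds a predicate
            decisions += len(node.values) - 1
    return decisions + 1

print(cyclomatic_complexity(SRC))  # 5
```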
Issues that militate against a reduction in the need for testing of software developed in
an object-oriented design paradigm include:
• a consequence of information hiding is that many methods have relatively few lines of
code and do not return a value to the user but change the state of the object; thus, to
test that the change of state has been correctly performed, it is necessary to send
additional messages to the object
• inherited methods, polymorphism and dynamic binding also introduce additional levels
of complexity into the testing process
Q7
A real-time system is one where the correct functioning of the system is dependent both
on results produced by the system and on the time at which those results are produced.
Such a system therefore needs to respond to stimuli occurring at different times, so the
system architecture must be organized to transfer control as soon as a stimulus is
received. The system is thus designed as a set of concurrent co-operating processes
managed by a real-time executive (see below).
Candidates should have recognised that the design of real-time systems requires
cognisance to be taken of the following phenomena: interrupt handling and context
switching, response times, data transfer rate and throughput, resource allocation and
priority handling, task synchronisation and inter-task communication, multi-tasking and
asynchronous communication. The functioning of a real-time system is dependent on the
system responding to events occurring at irregular intervals within a specified timescale.
An appropriate real-time design technique such as state machine modelling should have
been used to construct a representation of the air traffic control system described in the
question. State/description and stimuli/description tables should have been determined
and the corresponding state machine constructed. Three out of the four candidates
attempting this question did not try to construct a real-time representation of the air-traffic
control system.
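A state machine model of the kind expected can be sketched as a transition table; the states and stimuli below are purely illustrative and are not those of the exam scenario:

```python
# Hypothetical state machine for an aircraft under control. The pairs
# (state, stimulus) -> next state stand in for the state/description and
# stimuli/description tables mentioned above.
TRANSITIONS = {
    ("holding", "clearance_granted"): "approaching",
    ("approaching", "runway_clear"): "landing",
    ("approaching", "runway_blocked"): "holding",
    ("landing", "touchdown"): "taxiing",
}

def next_state(state: str, stimulus: str) -> str:
    # Unrecognised stimuli leave the state unchanged (a design choice).
    return TRANSITIONS.get((state, stimulus), state)

s = "holding"
for stim in ["clearance_granted", "runway_blocked",
             "clearance_granted", "runway_clear", "touchdown"]:
    s = next_state(s, stim)
print(s)  # taxiing
```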
Q8
Software re-engineering is concerned with re-implementing legacy systems to make
them more maintainable and easier to understand. The overall functionality of such re-
engineered systems should remain unchanged and normally the system architecture
also remains the same. Justification of re-engineering may focus on the reduced risk to
the organization if software is not completely redesigned and the reduced cost, since re-
engineering is significantly less expensive than the cost of developing new software.
There are also so many legacy systems in existence that complete replacement is not
viable. One limitation of the re-engineering process is that because the software
architecture is not updated it makes distributing centralized systems difficult; a second
limitation is that it is difficult radically to change from a non-object-oriented programming
language to an object-oriented language; a third limitation is that inherent limitations in
the system persist because the software functionality remains unchanged.
Activities associated with the re-engineering process include source code translation,
reverse engineering, program structure improvement, program modularization and data
re-engineering. Further details may be found in standard software engineering texts.
The second part of this question required the development of an object-oriented model
corresponding to a given scenario. Initially a relationship between the software and the
external environment should have been established, perhaps by the use of use-cases.
Classes involved in the system should have been identified. A noun-verb analysis of
requirements provides, via the nouns, candidates for classes and attributes whilst the
verbs are candidates for methods. For each class identified all relevant attributes and
Q9
Stages in the risk management process include (i) risk identification, (ii) risk analysis, (iii)
planning to avoid or minimize the effects of risks and (iv) monitoring of risk, where plans
are updated as more information becomes available. Risks may be considered to fall
into categories of project risks that affect the schedule and resources, product risks that
affect the quality or performance of the developed system, and business risks that affect
the organization developing or procuring the software. Strategies that may be employed
to deal with risks include avoidance strategies, minimization strategies and contingency
plans. Further details may be found in standard software engineering texts. Three out
of the six candidates did not attempt this part of the question and the remaining three
provided answers that were relatively superficial.
The second part of this question required the determination, by any appropriate means,
of an estimate of the minimum time required to complete a phase in a software
development project. By means of a critical path analysis, or otherwise, the minimum
time should have been calculated to be 90 days. Pessimistic (115 days) and optimistic
(65 days) estimates should also have been determined. An activity bar chart showing
the interrelationships between tasks in the phase and the schedule for phase
completion, based on the pessimistic prediction, should have been produced. Three out
of the six candidates were able to complete, relatively successfully, this part of the
question.
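The calculation can be sketched as a longest-path computation over the task network. The network below is hypothetical (the question's actual tasks are not reproduced in this report), with durations chosen so that the minimum time comes to the 90 days quoted above:

```python
# Minimal critical-path sketch: the minimum completion time of a phase is
# the length of the longest path through the task network.
tasks = {            # task: (duration_days, predecessors)
    "A": (10, []),
    "B": (30, ["A"]),
    "C": (20, ["A"]),
    "D": (40, ["B"]),
    "E": (10, ["C", "D"]),
}

def minimum_completion_time(tasks):
    finish = {}
    def earliest_finish(t):
        if t not in finish:
            dur, preds = tasks[t]
            finish[t] = dur + max((earliest_finish(p) for p in preds), default=0)
        return finish[t]
    return max(earliest_finish(t) for t in tasks)

print(minimum_completion_time(tasks))  # 90
```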
Selected References
General Comments:
Note: Due to the low number of candidates, comments on marking are not supplied;
only the answer pointers are given.
Q1
It is important that candidates can identify and differentiate the hardware components
and software components that are common to both architectures. The software
components can be abstractions of the virtual machines that these architectures host,
such as: NetworkOS, Data Management, Object Manager/Broker, AI games processor,
Presentation Manager/GUI. Hardware components also include physical devices and
firmware, e.g. games console, device ports, servers and network hardware (e.g.
repeaters).
Parts (a) and (b) A diagram could be used here to show how a Peer to Peer network
would handle the distribution of a game state. For example each peer would have to
take on different roles, each coordinating the distribution of command and control.
Candidates should appreciate the differences between Client-Server (C-S) and Peer to
Peer (P2P) architectures in the way that they coordinate the distribution of processes
and components.
C-S uses a centralised server and delegates processing to clients according to a broad
range of functions. For example, clients could have the job of maintaining a presentation
manager and this requires the client to render the graphics and process the spatial
attributes specifically for each client. The most important aspect is that the server(s) has
to control and maintain integrity of the game state in real time on the server. This can
lead to delays as the distribution of the game state to each client can cause significant
network load. For efficiency client-server connections need appropriate ‘middleware’
that nowadays is implemented in abstract terms as an Object Manager/Broker. CORBA
is one standard adopted and the philosophy behind CORBA should be mentioned briefly
(i.e. allows disparate hardware/software on the client side to communicate to the remote
objects). This architecture works well so long as the server connection is not lost. In fact
many games ‘disguise’ a loss of service by running a ‘makeover’ which predicts a game
state in the future or may simply build in redundancy (i.e. the weapon was deliberately
made to fail or the movement was made to look sluggish).
For Peer to Peer – there is no distinction between client/server and thus each Peer
needs to duplicate the necessary software and hardware on individual machines. Unlike
C-S each Peer would need to be connected to every other peer when the game is
running. Thus the games software would need to be duplicated on each peer. In fact this
is how real combat would occur, that is, through completely autonomous connections
between the peers.
Candidates need to express how this extra autonomy is paid for in terms of extra
complexity in the network protocol compared to client server and the associated
problems of distributing knowledge to other on-line players.
Q2
Part (a) The Unix operating system is the simplest system to discuss, though it is
possible, because of the broad definition of 'inter-process', to interpret the question as
referring to either of the following:
The simplest and perhaps the most familiar is the area of system programming: Unix
provides the fork and pipe system calls, together with command-line programs such as
grep.
Built-in operating system processes that run in the unix kernel include:
Semaphores (these allow shared data between processes to be protected from
corruption)
Queues: these act as containers for temporary data which concurrent processes use to
simultaneously store and read data.
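The pipe primitive mentioned above can be sketched as follows; this is POSIX-only, since it uses fork, and it is the same mechanism exposed to shell users as "|":

```python
import os

# A pipe carrying a message from a child process to its parent.
r, w = os.pipe()
pid = os.fork()
if pid == 0:                      # child: the writer end
    os.close(r)
    os.write(w, b"hello from child")
    os.close(w)
    os._exit(0)
else:                             # parent: the reader end
    os.close(w)
    msg = os.read(r, 1024)
    os.close(r)
    os.waitpid(pid, 0)
    print(msg.decode())  # hello from child
```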
Part (b) Threads and processes in operating systems tend to get confused, so
candidates could use the following characterisation to distinguish them:
Threads are a type of computation that has unrestricted access to shared memory.
Threads handle concurrency within programs.
Thus threads are a more convenient way of handling concurrency via the operating
system rather than by hardware or by a programming language. Unix tends to provide
language-level support for inter-process communication rather than for thread (intra-
process) communication.
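The defining property above, that threads share the process's memory and need protection against corruption, can be sketched as follows; the counter and thread counts are illustrative:

```python
import threading

# Threads share the process address space, so `counter` is visible to all
# of them; the lock protects the update from corruption (cf. the semaphore
# role described in Part (a)).
counter = 0
lock = threading.Lock()

def worker(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:               # critical section
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```

Without the lock, the interleaved read-modify-write cycles could lose updates, which is precisely the corruption the text warns about.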
Part (c) The main TWO de-facto programming standards that support 'inter-process'
communication between computers that are 'distributed geographically' and therefore
communicate remotely are: CORBA and RMI
These two are the main standards commonly mentioned in textbooks and candidates
should describe the main components.
Candidates should also explain how a ‘common’ framework is achieved using object
oriented technology and a common IDL (Interface definition language).
More recent standards are .NET and DCOM; these are built on having a common run-
time language which runs on every connected machine.
Once an IDL is defined for a software component legacy code can be processed. This is
why many organisations use facilities such as CORBA as legacy code is too costly to be
rewritten but still needs deploying on a variety of platforms.
Forecast is an object which has details of today’s forecast. As the object will be copied
across a network, it must implement the Serializable interface. This indicates that the object
can be converted into a byte stream and consumed by another process.
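A Python analogue of the Serializable idea, using the standard pickle module in place of Java serialization; the Forecast fields are illustrative:

```python
import pickle
from dataclasses import dataclass

# Analogue of a Serializable Forecast object: pickle converts the object
# to a byte stream that another process could consume and rebuild.
@dataclass
class Forecast:
    day: str
    temperature_c: float
    outlook: str

wire_bytes = pickle.dumps(Forecast("today", 18.5, "sunny"))  # -> byte stream
received = pickle.loads(wire_bytes)                          # consumer side
print(received.outlook)  # sunny
```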
Q3
Part (a) The MPG exhibits the characteristics of the following types of computer
systems:
A (soft) real-time system - the word 'soft' as opposed to 'hard' - characterised by the
following:
the need to handle external events that are not safety-critical and to process them in
real time.
the need to schedule concurrent processes that occur randomly with differing priorities.
Programming – Real time (RT) Java/Ada.
Part (b) The characteristics of a Real Time System that lead it to be described as 'Safety
Critical’ include:
The importance of meeting deadlines according to strict rules and protocols.
The protection of memory.
Prevention of corrupt data affecting integrity, for example during sensor input.
High-integrity user interface command and control, preventing users from overriding
the essential integrity of the system and of the environment it controls.
A key observation is that different events have different consistency and latency
requirements. Real-time events have strict timeliness and lax consistency requirements.
At the other end of the spectrum, consistent events have lax timeliness and strict
consistency requirements. The most challenging events are consistent real-time events
which cover Safety Critical systems; these have strict timeliness and strict consistency
requirements.
Real-time events are those events that players must be informed of in real-time in order
to react reflexively. The best example of a real-time event is the move event. Any delay
in the reaction time of one’s own avatar movement inhibits the effectiveness of player
reflexes.
Q4
Part (a) Candidates need to reveal an understanding of basic real time scheduling
concepts and these should underpin this question, the most important of these are:
Pre-emptive scheduling: a running low-priority process will be pre-empted by a higher-
priority process, allowing a more reactive response.
Rate monotonic: priorities are assigned to processes based on their deadlines; the
shorter the period or deadline, the higher the priority.
Event: a process which cannot have any interactions or forced dependencies.
Deadline: equivalent to the 'period' that is often represented in schedule tables. The
deadline establishes the units of time in which execution must complete before a
process releases CPU time to other processes.
Utilisation based scheduling is assumed not to be an issue, thus making the analysis
simpler.
Taking all this into account, the scheme is schedulable, as all deadlines are met up to
the first deadline of Event 3 (9 < 10). Continuing the cycle, using the spare cycles up to
the deadline for Event 4, also confirms this. Note that the schedulability test is sufficient
but not necessary, as the utilisation is not known.
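One well-known test with exactly this sufficient-but-not-necessary character is the Liu and Layland utilisation bound for rate-monotonic scheduling. The task set below is illustrative, not the event table from the question:

```python
# Liu & Layland bound: a task set is schedulable under rate-monotonic
# priorities if total utilisation U <= n * (2^(1/n) - 1). Failing the test
# does not prove unschedulability - the test is sufficient, not necessary.
tasks = [(1, 5), (2, 10), (4, 20)]   # (worst-case execution time, period)

n = len(tasks)
utilisation = sum(c / p for c, p in tasks)   # 0.2 + 0.2 + 0.2 = 0.6
bound = n * (2 ** (1 / n) - 1)               # ~0.78 for n = 3

print(round(utilisation, 3), round(bound, 3))
print(utilisation <= bound)   # True -> schedulable
```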
Part (b) If the MPG runs on a multi-processor system then single-processor scheduling
algorithms are no longer optimal. Also, the order of events (necessary for an MPG) can
be scheduled according to integrity constraints quite easily on a single processor. However
a multi-processor system will have no common clock and the delay that occurs in
sending messages between processes means it is impossible to schedule an observed
series of events. As state changes can be viewed as events it is impossible to derive the
same global view of a given subset of the system state.
For the second part, candidates should offer ideas about synchronisation schemes in the
light of the problems that have been discussed. The examiner is looking for candidates
to interpret and derive a solution rather than to recall a particular scheme that might
exist but is unlikely to be suitable for an MPG.
Q5
Part (a) With reference to the MPG games console, the hardware components that are
built in will be circuit boards or cards that slot into the backplane of the motherboard
inside the games console. Clearly, extensive heat sinks and miniaturisation are involved
to embed the various components in a small amount of space. However, the various
circuit boards are not too different from those embedded in a laptop computer. The most
essential are:
Network Card
CPU with a circuit board that processes I/O and a separate games co-processor (for
number crunching).
Part (b) There are some demanding requirements of MPG graphics that to some extent
are handled by LCD monitor technology, for example:
Colour balance (gamma settings), dot pitch (number of pixels per inch), distortion of
screen viewed, compatibility with graphics hardware/drivers.
An LCD monitor has a faster way of refreshing the display than a CRT. In essence an
LCD displays a bit map of actual pixels, and for this reason images are much sharper.
LCDs also do not suffer the distortion effects found in CRTs, particularly when viewed at
an angle rather than directly from in front; again this is reflected in better quality for
games programming and graphics.
Part (c) The extra hardware devices that are required to support a touch sensitive LCD
monitor compared to a normal LCD monitor include the use of ‘4-wire resistive touch
screen technology’. Resistive touch screens have a glass or hard plastic surface onto
which a thin, transparent conductive layer has been applied. A fine grid of spacer micro
dots is then applied and another layer of a conductive-coated flexible plastic (usually
Mylar) is laid on top. You end up with two transparent conductive planes of material
separated by a few thousandths of an inch. Pressure from a stylus or finger causes the
two planes to make electrical contact and forms the means of sensing the touch.
In operation, a controller must first apply a voltage across the x plane thereby forming a
gradient because of the resistive coating. A touch is sensed by using the y plane as an
input to an Analogue to Digital Converter (ADC), detecting a voltage when the two
planes are forced together. The ADC reading will vary as a function of the x (right/left)
position of the touch.
The y position is then calculated by removing the voltage from the x plane and applying
it to the y plane, from top to bottom. The x plane is then used as a pickoff and its output
is routed to the ADC. The only other hardware support is the interface to the host
processor and to the LCD display. Interfaces are similar to the PS/2 serial interface used
by a mouse or keyboard.
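The two-phase read sequence described above can be sketched in code. This is an illustrative model only (a 10-bit ADC and a 320x240 panel are assumed for the example; real controllers read hardware registers rather than a simulated gradient):

```python
# Illustrative sketch of a 4-wire resistive touch screen read cycle.
# The ADC reading is modelled as a linear function of the touch
# position along the driven plane; figures are assumptions.

ADC_MAX = 1023  # 10-bit converter assumed for illustration

def read_axis(touch_fraction):
    """Simulate the ADC reading on the sense plane while the other
    plane carries the voltage gradient. touch_fraction is the touch
    position as a fraction (0.0 .. 1.0) along the driven axis."""
    return round(touch_fraction * ADC_MAX)

def read_touch(x_fraction, y_fraction):
    # Phase 1: voltage across the x plane, y plane routed to the ADC.
    x_raw = read_axis(x_fraction)
    # Phase 2: voltage moved to the y plane, x plane used as pick-off.
    y_raw = read_axis(y_fraction)
    # Scale raw ADC counts to screen coordinates (320x240 assumed).
    return (x_raw * 320 // (ADC_MAX + 1), y_raw * 240 // (ADC_MAX + 1))

print(read_touch(0.5, 0.25))  # a touch at mid-x, quarter-y
```

The key point the sketch shows is that the two planes take turns: each acts once as the driven gradient and once as the ADC pick-off.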
Q6
Part (a) Three well-known methods are:
Heuristic evaluation
Cognitive walkthroughs
Formal usability testing
Other methods include Inquiry techniques which are more informal and less widely used
but would also be a satisfactory answer.
Dealing with one approach in more detail: the cognitive walkthrough. This is a review
technique where expert evaluators construct task scenarios from a specification or early
prototype and then role-play the part of a user working with that interface, "walking
through" the interface. Each step the user would take is scrutinised for impasses: points
where the interface blocks the "user" from completing the task, indicating that the
interface is missing something.
Part (b) Alternatives include joysticks and a games mouse with specialised buttons.
They still provide direct manipulation in the sense that users are actioning virtual controls
and objects using hand-eye coordination. In contrast non-direct manipulation includes
the use of windows controls and possibly command line interfaces. In certain cases
users need to select items from a pull down list to ‘hide’ non-essential information until it
is needed or it is in the correct context. Similarly command line interfaces are
appropriate where access to the operating software is required say for restoring files.
Q7
Part (a) The most obvious type of managed service would be a ‘Web services’ facility,
particularly if the MPG is deployed over the internet. The Web Service would provide a
service to MPG users releasing them from the installation of expensive bulky software on
their machines. MPG users would be subscribers and pay for access to the web service.
They would also automatically receive the latest versions of games. To support this, the
IT department would employ developers and programmers, together with third-party
software, to manage what could be complex server-based systems. The IT department may trade
services with other businesses and benefit from the integrated approach which web
services offer.
Part (b) Answers should address the key challenges to be overcome when these web
services are outsourced.
When the web services are outsourced the IT department of the games company would
simply act as an intermediary and pass all development and deployment for outsourcing.
Creative games designers employed by the games company would still design and code
games but would distribute these as ‘binaries’ to the outsourcing company. Thus the
games company can still maintain its creative/design effort without the worry of servicing
the implementation of its games on its own servers. A cost-benefit analysis may reveal a
cut-off where the decision to outsource is profitable and more convenient. Three major
drawbacks, which candidates should discuss, are:
Too many standards. Unfortunately, the many standards around Web services may
complicate the concept rather than simplify it. The plethora of standards makes
supporting Web services difficult for software vendors. However broader standards, such
as SOAP may help in the future.
Access. Yet another issue is who gets to use which services. The delivery of Web
services also brings with it the need for a new model for payment and creates issues
around authentication and subscription.
Security. Security is a concern for all Web services participants. However, existing
security models, such as SSL and access control lists, as well as emerging technologies,
should provide adequate, if somewhat basic, levels of security.
Part (c) One major technique which could be used to control and manage the changes if
outsourcing is adopted is Business Process Modelling (BPM). This technique manages the capture
and modelling of the business processes involved, so that the changes introduced by
outsourcing can be tracked and controlled.
Q8
Part (a) In the context of Software Quality Management and Process :
Reliability is an attribute of any computer-related component (software, hardware, or a
network, for example) that consistently performs according to its specifications.
For example the Institute of Electrical and Electronics Engineers IEEE sponsors an
organisation devoted to reliability in engineering, the IEEE Reliability Society (IEEE RS).
Part (b) The term 'quality' is usually defined as "meeting the requirements of the
customer", and a standard codifies such requirements. Associated with a standard is a
means of measuring compliance; this is called quality assurance, a systematic process for
ensuring quality during the successive steps in developing a product or service. ISO 9000
is a standard for ensuring that a company's quality assurance system follows best
industry practices.
Part (c) There are different standard interfaces to connect devices such as printers,
scanners, and external zip drives to a computer. In a computer a "port" is an electrical
channel for getting information and commands in and out of a computer. It is the way the
central microprocessor running the computer gets its information and commands to the
various peripheral devices or device controllers, such as video cards, hard disk cards,
etc.
Each device will have a connection port set to interface with a computer via an internal
I/O bus. Candidates should therefore discuss broadly the different needs of device
drivers and the standards they use.
General Comments:
In 2003 the examination was taken by 68 candidates: 62 overseas and 6 UK. The
pass rate was 50%.
Q1
This question concerned the use of T flip-flops in the design of a modulo-3 up/down
counter. Part (a) was on the topic of T flip-flops and was generally answered well. Part
(b) involved the production of a state diagram, a next state table and an output table for
the counter. As a Mealy finite state machine was requested, the arcs in the state
diagram should have indicated output as well as input. The description could be
considered ambiguous in terms of the output when C=0, so in marking alternative
feasible decisions were accepted. However, as the output on an arc reflects the current
output rather than the output after transition to the next state, the CD=10 arc leaving
state 2 and the CD=11 arc leaving state 0 should have output 1, rather than the CD=10
arc leaving state 1 and the CD=11 arc leaving state 1. Many candidates marked such
incorrect arcs with a 1 output. Some candidates did not show output on the arcs.
Follow-through from the
state diagram was used in marking the next state and output tables. In part (c) marking
follow through was used from the tables presented in part (b). Most candidates coped
well with production of the excitation table and the flip-flop input equations but some
failed to link the flip-flop outputs to the logic for the flip-flop inputs in the final circuit or
failed to include the output Y in the circuit.
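A behavioural sketch of the counter may clarify the point about output on the wrap-round arcs. This is one reading of the question, not the official solution: C=1 is assumed to enable counting, D=0 to count up, D=1 to count down, and the output Y to be asserted on the arc where the count wraps round:

```python
# Hypothetical model of a modulo-3 up/down Mealy counter; the input
# conventions (C enables, D selects direction) are assumptions.

def step(state, c, d):
    """Return (next_state, y) for current state and inputs C, D."""
    if c == 0:
        return state, 0             # hold; output taken as 0 when idle
    if d == 0:                      # count up
        nxt = (state + 1) % 3
        y = 1 if state == 2 else 0  # wrap 2 -> 0 flags the output
    else:                           # count down
        nxt = (state - 1) % 3
        y = 1 if state == 0 else 0  # wrap 0 -> 2 flags the output
    return nxt, y

def t_inputs(state, c, d, bits=2):
    """T flip-flop excitation: T = Q XOR Q_next for each state bit."""
    nxt, _ = step(state, c, d)
    return [(state >> i & 1) ^ (nxt >> i & 1) for i in range(bits)]

# Count up through a full cycle from state 0.
trace = []
s = 0
for _ in range(3):
    s, y = step(s, 1, 0)
    trace.append((s, y))
print(trace)  # the wrap, and hence Y=1, occurs on the third step
```

The excitation function shows the link candidates often missed: the T inputs are derived from the current and next state bits, and must feed back into the flip-flops in the final circuit.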
Q2
Part (a), on the topic of memory interleaving, was generally poorly addressed. Few
mentioned both fine-grained interleaving (consecutive addresses in consecutive memory
modules) and coarse-grained interleaving (where high-order bits determine the module).
Fine-grained interleaving is useful for example in common bus systems to allow a
processor to issue a second read request before the first has completed. Coarse-grained
interleaving is useful, for example, in crossbar systems, allowing different processors to
access different memory modules simultaneously.
Q3
Part (a) involved showing how p- and n- transistors are connected in CMOS to produce
inverter, NAND and NOR gates. Parts (b) to (f) concerned the implementation of f = (a
⊕ b) ⊕ c (where ⊕ represents Exclusive OR) in a number of different ways involving
different logic gates. Costs and delays were also to be calculated for circuits produced.
Follow through from solutions to Boolean expressions in (b) was used in marking the
circuits in parts (c), (d) and (e). Similarly follow through from the circuits produced was
used in marking the delays and costs. In parts (d) and (e) some did not observe that 3-
input NANDs and NORs were to be used and hence used 4-input gates in places. Part
(f) could be interpreted either as comparison of NANDs against circuits such as those in
part (c), so using a semi-custom approach could reduce manufacturing costs, or it could
be interpreted as a comparison of use of NANDs against use of NORs, where area could
be reduced. Parts (a), (d) and (e) were omitted by several candidates.
Q4
This question involved the production of low-level instruction sequences for the same
calculation using a number of different processor instruction types: 3-address, 2-
address, 1-address and stack-based. Alternative instruction sequences were possible
with follow through used in calculations of number of instructions and number of memory
accesses required. In part (a) (i) and (ii) general CPU registers or additional memory
locations could be utilised but in (iii) only use of the accumulator and additional memory
locations were permitted. Many candidates omitted to consider the memory accesses
required to obtain the instructions themselves when calculating the number of memory
accesses. The instruction sequence, along with stack contents, was generally answered
well in part (b). In part (b) stack may be in main memory or in specialised registers and
the assumption made here would affect the number of memory accesses required.
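The stack-based case can be illustrated with a tiny evaluator (the expression and mnemonics below are the author's own example, not the one set in the paper): the arithmetic instructions are zero-address, and only PUSH and POP name memory locations.

```python
# A minimal stack machine: PUSH/POP touch memory, ADD/MUL work on the
# top of stack. Mnemonics and the expression are illustrative.

def run(program, memory):
    stack = []
    for op, *arg in program:
        if op == "PUSH":
            stack.append(memory[arg[0]])
        elif op == "POP":
            memory[arg[0]] = stack.pop()
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return memory

# X = (A + B) * (C + D) as a zero-address sequence:
prog = [("PUSH", "A"), ("PUSH", "B"), ("ADD",),
        ("PUSH", "C"), ("PUSH", "D"), ("ADD",),
        ("MUL",), ("POP", "X")]
mem = run(prog, {"A": 2, "B": 3, "C": 4, "D": 5})
print(mem["X"])  # (2+3)*(4+5) = 45
```

Counting memory accesses for such a sequence must include the instruction fetches themselves, the step many candidates omitted, and, as the report notes, depends on whether the stack lives in main memory or in specialised registers.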
Q5
This question addressed full-adders, ripple-carry adders and carry-look-ahead
generators, along with the delays involved in the production of carries. In part (a) the
carry-out delay calculation was accepted both in solutions which assumed that the other
inputs were available before the carry-in and in solutions which assumed that the other inputs were
available at the same time as the carry-in. In part (b) some delay calculations were
incorrect as the incorrect assumption was made that a carry could not ripple through to
the next full adder until the sum output from a full adder was also available. Many who
tackled this question omitted part (c) regarding the carry-look-ahead generator.
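The carry-look-ahead recurrence at issue, c[i+1] = g[i] + p[i]·c[i] with generate g = a·b and propagate p = a ⊕ b, can be worked through numerically (the 4-bit operands below are an assumed example):

```python
# The carry-look-ahead recurrence checked on a concrete 4-bit addition.
# Note the carry chain depends only on g, p and the previous carry,
# never on the sum outputs -- the incorrect assumption noted above.

def carries(a_bits, b_bits, c0=0):
    """Return the carry into each position plus the final carry-out."""
    cs = [c0]
    for a, b in zip(a_bits, b_bits):
        g = a & b                  # generate
        p = a ^ b                  # propagate
        cs.append(g | (p & cs[-1]))
    return cs

# 0b1011 (11) + 0b0110 (6), bits given LSB first.
a = [1, 1, 0, 1]
b = [0, 1, 1, 0]
cs = carries(a, b)
total = sum((ai ^ bi ^ ci) << i for i, (ai, bi, ci) in enumerate(zip(a, b, cs)))
total += cs[-1] << 4
print(total)  # 11 + 6 = 17
```

Expanding the recurrence so that every carry is a two-level function of the g's, p's and c0 is precisely what the look-ahead generator does, removing the ripple delay altogether.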
Q6
This question was on overflow and parity. It was not generally answered well, with many
candidates addressing only parts of the question and in particular in part (a) ignoring
subtraction and dealing only with addition. Some candidates did not deal with the twos
complement system and instead used unsigned binary. Part (b) was answered better on
the whole although many omitted discussion of the limitations of the use of parity.
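The two topics of the question can be demonstrated in a few lines (the bit width and example values are assumed for illustration):

```python
# Twos complement overflow detection and even parity.

def add_overflow(a, b, bits=8):
    """Overflow occurs when two operands of the same sign produce a
    result of the opposite sign; for subtraction, negate b and add."""
    mask = (1 << bits) - 1
    s = (a + b) & mask
    sign = 1 << (bits - 1)
    return bool((~(a ^ b) & (a ^ s)) & sign)

def even_parity(byte):
    """Parity bit that makes the total number of 1s even. Its
    limitation, which answers often omitted, is that any even number
    of bit errors goes undetected."""
    return bin(byte).count("1") % 2

print(add_overflow(0x7F, 0x01))  # 127 + 1 overflows in 8 bits: True
print(even_parity(0b1011))       # three 1s -> parity bit 1
```

The sign-comparison rule in `add_overflow` is the twos complement condition the question wanted; applying it to unsigned binary, as some candidates did, answers a different question.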
Q8
This question concerned the topics of (a) DMA (direct memory access) and (b)
asynchronous communications interfaces (UARTs) for transmission and receipt of serial
data. This was a fairly popular question. Part (a) was answered well on the whole but
some solutions lacked detail of signals or registers involved or confused the order of
events. Part (b) was generally answered less well by many, with some candidates
focusing solely on the presence of start and stop bits and some others describing
communication issues not relevant to the topic of the question.
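The asynchronous framing in part (b) involves more than the presence of start and stop bits; a sketch of one common format (assumed here: one start bit, eight data bits LSB first, one stop bit, no parity) shows the full sequence:

```python
# UART-style framing of a byte for asynchronous serial transmission.
# The frame format (8N1) is an assumption for illustration.

def frame(byte):
    """Start bit (0), eight data bits LSB first, stop bit (1)."""
    data = [(byte >> i) & 1 for i in range(8)]
    return [0] + data + [1]

def deframe(bits):
    """Check the framing and recover the byte."""
    if bits[0] != 0 or bits[-1] != 1:
        raise ValueError("framing error")
    return sum(b << i for i, b in enumerate(bits[1:9]))

line = frame(0x41)
print(line)                # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
print(hex(deframe(line)))  # 0x41 recovered at the receiver
```

The falling edge of the start bit is what lets the receiver resynchronise its sampling clock on every character, which is the essence of asynchronous operation.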
Q9
This question concerned (a) vectored and non-vectored interrupts and (b) daisy-chaining
and the use of a priority encoder as methods for dealing with priority interrupts. It was a
popular question. Part (a) was generally tackled well. In part (b) many produced a good
diagram showing interconnection of devices for daisy-chaining. However many omitted
to fully describe the relationship between PI, PO and requests from devices or omitted to
produce fully correct truth tables and gate-level circuitry for production of PO and enable
from PI and RF (flip-flop indicating request from device). For the priority encoder
approach diagrams were generally mainly correct but often the accompanying written
descriptions were inaccurate.
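The PI/PO relationship many candidates failed to describe is the standard one; as a sketch (assuming the usual convention that PI=1 means priority is granted), each stage passes priority on only if its own request flip-flop is clear:

```python
# Daisy-chain priority logic: one stage, then propagation down a chain.

def daisy_stage(pi, rf):
    """Return (po, enable) for one device in the chain.
    pi: priority in; rf: this device's request flip-flop."""
    enable = pi & rf        # this device gets serviced
    po = pi & (1 - rf)      # pass priority on only if not requesting
    return po, enable

def chain(requests):
    """Propagate priority through the chain; return the index of the
    enabled device, or None if no requests are pending."""
    pi = 1
    for i, rf in enumerate(requests):
        pi, enable = daisy_stage(pi, rf)
        if enable:
            return i
    return None

print(chain([0, 1, 1, 0]))  # device 1 wins: nearer the head of the chain
```

The gate-level content of `daisy_stage` is one AND gate and one AND-with-inverter per device, which is the truth table and circuitry the question asked for.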
General Comments:
This was the eleventh examination sat under the revised Aims and Scope and all
questions have been set and marked by the same team of examiners. This has allowed
continuity and consistency in the monitoring of standards over the period.
There was a major concern, expressed in many recent reports, about the powers of
expression and knowledge of the English language held by the majority of candidates
using English as a second language. Some candidates were unable to spell basic
words, or even repeat words contained in the questions themselves such as ‘career’,
‘article’ and ‘psychology’. Many found it difficult to construct sentences, which makes
entry for a graduate-level British professional examination rather pointless.
As stated every year, poor English and grammar were compounded by very inadequate
presentation from many candidates, although this has shown some improvement this
year. Q2 is a good example. Candidates were asked to write an engineering magazine
article but nearly all failed to provide a title for their article or make any concessions to
magazine format. Can you imagine an article without a title in any magazine in the real
world? Q3 is another good example. A letter was asked for, but only a few candidates
produced anything that looked like the letter that they would actually write in the real
situation. The question was carefully chosen to reflect the sort of activity that could be
expected at this stage of the candidates’ professional life.
Candidates still seem obsessed with filling pages, without any thought for the reader, yet
their answers on communication always state the importance of catering for the reader
and the need for good presentation. Perhaps it is the nature of examinations (and this
year the introduction of new examination booklets) that makes candidates panic and
abandon all the normal rules of the working environment and produce careless and
unstructured responses. It is often difficult to identify the question within the answer
booklets and find where they begin and end. For example, Q1, in three parts, was often
scattered about all over the script with little attempt even to label the sections of the
answer Q1(a), Q1(b) or Q1(c). Some tried elaborate answers embracing all three parts
but these were rarely manageable. This was compounded by large numbers of
‘crossings-out’ which demonstrated that many candidates had made no attempt to plan
their answers before embarking upon the answer. But it must be said that there was
more evidence of planning than in previous years.
Good marks await those candidates who present their work in a professional manner. At
all times candidates should imagine that their work is being performed in an employment
context rather than in an examination. It is disappointing that many candidates resort to
the use of bullet points in a letter or essay question in an attempt to mask poor written
English. The mis-spelling of the title of the paper itself or the mis-spelling of words
printed in the question look really unprofessional and were reflected in the final marks.
Section C on the Organisation of Engineering Activities was disappointing this year, with
much too much reliance on rote learning from the texts and far too little considered
thought from wider reading or discussion (see the footnote following this section at the
end of this report).
Section A – Communication
Q1
This popular question repeated the format of previous years’ Q1, but with changes in the
type of communication and the audience. It should be fairly obvious by now that different
types of communication respond to different rules and to the nature of their intended
audience. Candidates have only to scrutinise a range of different documents to prepare
This year the question centred on three very different types of written communication,
minutes, a magazine article and a memorandum, where the audience and purpose were
indicated.
Section (a) concerned the minutes of an engineering project team meeting. Candidates
could have divided their answer among the four given headings of structure, style,
content and presentation, and considered relevant points under each heading. Each
section should then contain a few paragraphs relevant to each heading. How are
minutes structured, what style is used, what do they contain, and how are they
presented? This is a theoretical question and examples can help but it is surprising how
many candidates wrote a fictitious set of minutes without attempting any discussion of
the variations.
Section (b) and section (c) should follow the same pattern. Section (b) was reasonably
well done although some failed to spot the significance of a female target group or
address the nature of a headline that would attract their attention and the contents that
would retain it. Some offered a list of contents for the article, from the invention of the
wheel through to space travel, coverage that would require a whole book! Some were good in that
they offered advice and addresses for those wishing to follow up from the article and
many were sensitive to the concerns of this female age group.
Many candidates made a mess of section (c) and spent too much time on the nature of
internal memoranda rather than on the contents.
Q2
This question was set in the expectation that all candidates would be familiar with
professional publications from their respective institutions.
It is a good idea to know what 400 words are going to look like on the page and this can
be easily achieved by working out how many words you write to a line and dividing this
into 400 to get the rough number of lines required for the answer before you begin. Then
a structure can be devised to fit the pages before you commence.
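The rule of thumb above is a one-line calculation (the words-per-line figure is assumed for illustration; handwriting varies):

```python
# Estimating how many lines a 400-word article will occupy,
# assuming roughly 10 words per handwritten line.
target_words = 400
words_per_line = 10
lines_needed = target_words // words_per_line
print(lines_needed)  # plan for about 40 lines before starting to write
```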
It was expected that candidates would try and produce the required article, by starting
with the given heading ‘Communication Failures Can Cause Major Industrial Accidents’.
They were then expected to proceed with an introduction, stressing the importance of
good communication and the various types and levels of communication found in an
industrial setting. Some good examples (perhaps drawn from the course textbook) could
then be presented to show a range of past failures followed by how these were corrected
and finally some discussion of ways for future improvements in communication across
industry generally.
Some got bogged down in their examples which often had nothing to do with
communication, others offered one example only and few tackled future improvements.
Most ran out of steam or broke down, abandoning the article format for an essay on
industrial accidents. Always remember that this is Section A on ‘Communication’ and
that the task is at least as important as the content.
Q4
There were many disappointing answers to this question. Candidates should consult some
of the many major accident reports, often produced by Governments, in order to be able
to answer questions of this type. The Engineering Council’s ‘Guidelines on Risk Issues’
published in 1993 gives a good bibliography of many such reports.
Candidates were invariably unable to give examples from, or name, any major accident
reports, and some created extended examples of a report rather than write about
elements of style, structure, order, psychology and presentation. The question was quite
clear in that it called for an explanation of how the above elements are used to produce
high-quality reports, using examples from major reports by way of illustration. It is
probable that weak candidates who had never studied a major accident report seized on
this question without fully understanding it or remembering that it was a Section A
question about the communication of major accidents in reports and not purely about the
causes of accidents.
Read sections from a selection of major accident reports and you will reap many
dividends later in your careers!
(a) In most industrialised nations water supply and water quality have significantly
improved through national and international programmes (e.g. EU initiatives on sewage
treatment, bathing waters and drinking water). However, water is often seen as the
biggest problem in many parts of the developing world. Many sources are beginning to
dry up as weather patterns change. Also water is being diverted for agricultural use and
there is increasing contamination of supplies by pesticides, industrial chemicals and
untreated sewage. There needs to be increased awareness of the importance of clean
water supplies and an increase in the distribution of deep wells. Schemes such as
Water-Aid financed by European water companies need greater promotion.
(c) Problems include the greenhouse effect, acid rain and the depletion of the ozone
layer. These are often seen as problems caused by the highly industrialised countries,
but little attention has been paid to their global impact. Clearly, energy for the developing
world needs to be linked to renewable sources.
(d) Destruction of habitat is another very important issue. Examples include the large scale
logging operations in tropical regions and ‘slash and burn’ techniques used in rain
forests to accommodate grazing cattle. The greenhouse effect is likely to cause flooding
of agricultural land and possible changes in the migration patterns of animals on land
and fish at sea. Purchases of forests by companies are helping the situation, but it is
alleged that some countries are complicit in allowing the destruction of vital natural
habitats to continue.
(e) Corporate accountability has become a major issue in the Western world. Most of the
G8 and other international summit meetings have witnessed protests over the way that
multi-national companies are putting profit before sustainable development.
Governments have been seen as guilty of failing to control these companies although
some companies are now trying to promote greener policies that put people before
profits.
Whilst the Johannesburg Summit highlighted these important issues, the long term
outcomes remain to be seen.
Q6
This was a very popular question with virtually all candidates giving good examples of
large scale engineering incidents - unlike earlier years where trivial or parochial
accidents were offered. Accidents chosen for discussion included; the chemical works at
Bhopal (India) with the release of methyl isocyanate into the surrounding city (most
popular example), the Herald of Free Enterprise ro-ro Channel ferry disaster, the
losses of the Space Shuttle (both 1986 and 2003); the nuclear reactor incidents at
Three Mile Island (USA) and Chernobyl (then USSR); the King’s Cross Underground
Station fire and the destruction of the chemical works at Flixborough in the UK.
Overall, each of the four sections was approached correctly in terms of the information
required. It was pleasing for the examiners to see that candidates appeared to be well
prepared for a question concerning health and safety issues which are of increasingly
vital concern to professional engineers. Because of the range of different answers the
examiners cannot specify model answers but attention is drawn to the set textbook, The
Professional Engineer in Society, (Collins, Ghey and Mills) for a more detailed account of
the different accidents and to published accident reports of recent incidents since its
publication.
(a) The concept of the ozone layer was generally understood: a stratospheric layer of O3
that protects against increased levels of the Sun’s ultra-violet radiation reaching the
lower levels of the atmosphere. It has been recognised in recent years that a hole in the
ozone layer (particularly over Antarctica) has developed due to anthropogenic pollution.
Consequences of the ozone hole include increased human and animal skin cancers and
stunted plant growth.
(b) Confusion often existed over the main atmospheric gases responsible for the
destruction of the ozone layer. Many candidates thought that the ‘greenhouse effect’
and ‘damage to the ozone layer’ were caused by exactly the same types of atmospheric
pollutants. It should be noted that carbon dioxide is not presently implicated in damaging
the ozone layer.
Chlorofluorocarbons (CFCs) and most other types of molecule containing a halogen
atom (e.g. bromine). Sources of CFCs include: aerosol propellants, refrigerants, foam
insulation, industrial cleaning (degreasing) solvents, Halon-containing fire protection
systems, and soil fumigants.
Nitrogen oxides NOx (NO and NO2 including nitrous oxide N2O). Sources include; all
forms of hydrocarbon combustion, particularly high level supersonic flights and rockets,
and nitrogen rich fertilisers.
Better answers also included some of the chemical reactions of CFCs taking place in the
stratosphere with the generation of chlorine monoxide as the key intermediate in the
reaction scheme.
(c) Many candidates again confused the correct answer with the ‘greenhouse effect’.
Reduction of CFCs under the 1987 Montreal Protocol (and subsequent revisions)
includes their replacement with less harmful alternative chemicals, e.g. removing them
from aerosols and using nitrogen or air as propellants instead, and replacing CFCs in
refrigerants with HCFCs, which have less ozone-destroying potential. Further measures
are the reduction of NOx from the combustion of fossil fuels in power stations and in
transportation, and the consideration of alternative energies. Discussion continues on
ways of implementing these approaches
via national and international agreements. Monitoring and policing issues include the
implications of yet further atmospheric loads through the increasing industrialisation of
developing countries e.g. China is now witnessing the widespread use of domestic
refrigerators.
(a) Nuclear power: Benefits include no emissions of greenhouse gases (e.g. carbon
dioxide, nitrogen oxides, volatile hydrocarbons) and, in addition, no emission of sulphur
oxides, which are implicated in the formation of acid rain. There is virtually an everlasting
supply of energy from fission type reactors. Fossil fuels can be conserved whilst
accidents and pollutant events occurring during the extraction and transportation of fossil
fuels to power stations can be eliminated.
The downside includes danger of accidents such as Chernobyl and Three Mile Island
where the release of radioactive material into the environment threatens unpredictable
long term health effects; the need for long term safe disposal of the different grades of
nuclear waste and the decommissioning of the reactors themselves.
Other increasing concerns are the possible sabotage of reactors and the proliferation of
nuclear materials to unstable regimes.
(b) Solar power: This benefits from being completely renewable, with few pollution or
safety issues. It is limited to areas of good/prolonged sunlight and is generally
considered a small-scale option to supplement fossil-fuel or nuclear-derived energy. On
a domestic level
panels can be installed on roofs to heat water. There are also plans to develop solar
cells to provide power (e.g. prototype solar powered automobiles). Much research is
currently being undertaken to improve the size and efficiency of solar powered devices.
(c) Wind power: This is becoming an increasingly popular alternative (wind farms) in
both the UK and parts of Europe. A constant supply of wind is necessary, and coastal
zones are best suited for this purpose (see the July 2003 announcements in the UK for
placements in estuaries). It is completely renewable, with no pollution or major safety issues.
However, the problems of visual impact and rotor blade noise remain. Wind power
is still regarded as small-scale overall, but has much potential for future development as
its current high cost is reduced and efficiency increased.
(d) Tidal power: This is limited to areas of tidal movement but has huge potential, whilst
being totally renewable, and non polluting. The visual impact and high costs are the
downside coupled with the cost of transmitting power to zones of large population. This
source is still relatively underdeveloped and presents many engineering challenges.
(e) Geothermal power: This is limited to areas with the correct geology, which enables
the extraction of hot water (e.g. Southampton in the UK). It can never be considered a
real long-term solution, only a contributor to the overall energy mix. It remains expensive in terms
of its overall potential, but presents little visual impact and no real pollution problems.
Q10
This was easily the most popular question, probably because candidates realised that it
was a good opportunity for regurgitating rote learning. Many candidates gave quite good
overviews of the Japanese approach to the management of organisations and were able
to gain good enough marks (e.g. 10/15) in order to achieve a pass, without even
attempting to answer part (b).
The second part of the question, when it was attempted, was often poorly answered.
Only a few candidates were able to see the bigger picture and explain how global
economic conditions, competition from newly developing countries and structural
problems in the Japanese economy were the root cause of stagnation rather than
management style per se. Some companies recognised the problems caused by the ‘job
for life’ culture when they needed to restructure and downsize.
Q11
This was the least popular question, probably because it required most original thought
and some knowledge of current affairs through wider reading. This did not stop some
candidates from deciding that the answer was Peters and Waterman or federal
organisation structure, and regurgitating all that they had memorised about these topics.
Q12
Few candidates picked up the nuance of the term ‘inspirational’ in this question; most
simply trotted out what they could remember about Adair’s action-centred leadership model or
McClelland’s motivation theory. There were some token references to the great man
approach with mentions of Hitler, Churchill and Gandhi.
Generally the answers to this section were disappointing, given what the examiners saw
as an interesting and topical set of questions based on contemporary management
theory and we hope that all studying to be professional engineers will take a greater
interest in management through discussion and wider reading. It really is not good
enough to attempt to fit pre-planned answers, often learned by heart, to the questions.
No one should expect to pass this examination using such methods in the absence of
some understanding of the question, wider knowledge gained through reading, and
thoughtful endeavour.