
Research Policy 27 (1998) 689–709

A cognitive model of innovation


Paul Nightingale *

Complex Product System Innovation Centre, Science Policy Research Unit, Mantell Building, University of Sussex, Brighton, BN1 9RF, UK
Accepted 24 June 1998

Abstract

This paper develops a theoretical framework based on empirical case study research that explains the role of tacit
knowledge in technical change and how scientific knowledge is used in innovation. It develops a theoretical argument that
proposes that science cannot be directly applied to produce technology because science answers the wrong question.
Innovation starts with a desired end result and attempts to find the unknown starting conditions that will achieve it. Scientific
knowledge, by contrast, goes in the opposite direction, from known starting conditions to unknown end results. This
difference in direction is overcome by following tacitly understood traditions of technological knowledge that co-evolve with
technological paradigms, but are themselves outside the realm of science. The paper demonstrates how technologies are
‘constructed socially’ and embody sociological and political conceptions of problems and appropriate solutions, but the
theory maintains a very realist perspective. The Cognitive approach treats knowledge as a capacity that is embodied in the
brain, and embedded in socialised practices, using the metaphor of ‘pattern’. The paper explores why scientific patterns
cannot be perfectly extrapolated for complex, non-trivial technologies and shows why technical change is dependent on
learnt tacit conceptions of similarity that cannot be reduced to information processing. © 1998 Elsevier Science B.V. All rights reserved.

Keywords: Innovation; Tacit knowledge; Knowledge

1. Introduction
This paper provides a theoretical model that explains how the innovation process moves from an initial, ill-defined conception of a problem, through a series of sub-problems, to a finished technology. The paper
introduces the concept of a ‘technological tradition’ that guides the innovation process, co-evolving with the
technology to provide an implicit tacit understanding of how the technology should function, and how problems
in the innovation process should be solved. In doing so, the paper provides a way to understand the role of
scientific knowledge in innovation. This paper concentrates on the output of science as the tacit ability to
understand and interpret patterns of behaviour in nature. This ability cannot be directly applied to produce
technology, but does play a vital, indirect role in innovation, a role more in tune with current empirical understanding in science policy. The information-based, 'market failure' theories are empirically very weak, and science policy has tended to be driven by empirical investigations that are starkly at odds with the old theory (Pavitt, 1996).

* Tel.: +44-1273-686758; fax: +44-1273-685865

PII: S0048-7333(98)00078-X

The theory stresses the continuing, vital role of tacit knowledge in innovation, a role highlighted by Nelson and Winter (1982) and re-emphasised by Dosi (1988) in their discussions of the key features of technical change. While much of the recent emphasis on 'knowledge management' may be a fashionable 'fad', the
industrial management of knowledge is an uncertain ‘black art’ as the exact mechanisms of knowledge use in
innovation are largely unknown. This gap in our understanding has begun to be filled by Vincenti's (1990) study of knowledge use in engineering. This paper attempts to fill that gap further and links developments in the
philosophy, sociology and psychology of science and technology with empirical findings. While previous
empirical work has shown that tacit knowledge is important, this paper attempts to explain why it is important
and forms an implicit critique of attempts to abstract tacit knowledge away with ‘information processing’
explanations of innovation.
The model consists of three building blocks. The first is a Cognitive theory of knowledge that considers knowledge as a capacity to extrapolate patterns. This capacity is both embodied in the biology of the human brain and embedded in social networks, rather than being an abstract entity like information. The second is a simplistic conception of science as a social practice of mapping and codifying patterns in nature. Part of this involves exploring how three isomorphic levels of pattern, the mathematical, the physical and the imaginative, interact and diverge from each other. The third is a discussion of the nature of technology itself, where technology is defined in terms of 'artificial function'. This argues that the functions that technologies perform are not intrinsic properties and are therefore dependent on tacit knowledge.
Once these building blocks are in place, the paper argues that theories that treat the output of science as
‘information that can be directly applied to technical change’ are problematic because science answers the
wrong question. The stylised innovation process starts with an intended end result and then tries to find the
unknown starting conditions that will produce it. 1 But, science can only go in the opposite direction, from
known starting conditions to unknown end results. For example, the starting point of a pharmaceutical innovation process might be a desire to stop a given disease, but the exact molecule that would stop it is unknown. One cannot start with a desire to stop a disease, simply add some Quantum Mechanics, and expect the specific 3D chemical structure of a solution molecule to drop out. This is the 'direction argument'.
The paper then proposes an alternative Cognitive model. In this 'Innovation Cycle' argument, general, nebulous initial problems are resolved into specific, concrete problems through a process of proposing, testing and modifying tacitly understood functional solutions. These are then solved by imagining a functional solution, testing it, and then modifying the initial solution until an acceptable solution is found. The
paper introduces the concept of a technological tradition that guides design towards technical solutions by
providing pathways for the resolution of problems and the generation of solutions. The paper ends by
considering how the model can be extended to provide a more realistic understanding of real-world innovation
processes where innovation takes place among diverse groups of people and how innovation processes
themselves are related to the differences in the intrinsic properties of the technology being produced.

1.1. Previous treatments

The linear model 2 is long dead in the Science Policy community, but is still implicit in much of the lobbying by the scientific community. Its success owes more to how easily it provides a justification for the public funding of science, and to how well it fitted into established theoretical frameworks, than to any empirical

1. I fully acknowledge that sometimes innovation is far less rational and far more serendipitous than the process described here. This is a stylised innovation process which seeks to draw out theoretical points rather than describe every historical process. The very unrealistic nature of the assumption that the output of innovation is either known or fixed will be discussed later.
2. Where the relationship between science and technology was abstracted to a uni-directional flow of information from science into technology.

validation. Reading the broad sweep of history backwards, from the present to the past, it is common to find a link from technology to previous science. The historical extent of these links is an empirical matter. But when we turn and look from the present into the future, the linear model falls apart, as it fails to explain how today's science can be turned into tomorrow's technology. The notion that the output of science is
information that can be directly applied to produce technology cannot explain many of the key features of innovation, such as the importance of tacit knowledge, why so much science is done in industry, why so much technology seems to be produced without much input from science, why in many instances the technology comes first and the science that can explain it comes later, why technical production is so localised, and why different industries have very different 'scientific' requirements.
Moreover, it leads to policy problems as it:
• exaggerates the importance of the international free rider problem and encourages (ultimately self-defeating) techno-nationalism;
• reinforces a constricted view of the practical relevance of basic research by concentrating on direct (and more easily measurable) contributions, to the neglect of indirect ones;
• concentrates excessively on policies to promote externalities to the neglect of policies to promote the demand for skills to solve complex technological problems (Pavitt, 1996, p. 12).
A considerable body of work has moved on from the linear model to produce a far more realistic conception of innovation (Nelson and Winter, 1977; Kline and Rosenberg, 1986; Rothwell, 1992, 1993). These papers have highlighted the differences in innovation processes by sector (Pavitt, 1984), the importance of the market (Rothwell et al., 1974), end users (Rothwell et al., 1974; von Hippel, 1988), the extent to which the current science base can be exploited (Rosenberg, 1994), the effect of product complexity (Vincenti, 1990; Miller et al., 1995; Hobday, 1998) and the specific problems associated with innovation in software (Boehm, 1988, 1996; Brooks, 1995; Brady, 1997).
Following the hugely important work of sociologists of technology (Bijker et al., 1987; MacKenzie, 1990), attention has been drawn to how different conceptions of problems and solutions among diverse agents impact on the innovation process. As David (1997) points out, the linear model has long been rejected by social scientists, and the scientific community's continuing appeal to a discredited model leaves it open to trivial criticism.
This paper aims to complement these new models by explaining the cognitive mechanisms at work. By doing
so it explains how general, often social problems are resolved into specific technological ones as the innovation
process follows technological traditions that co-evolve with technological trajectories. To do this it is necessary
to move away from linear models of the science–technology relationship where the output of science is codified
information that can be directly fed into technology. By treating science, technology and firms in terms of
information processing it is possible to abstract to such an extent that scientists, theories, technologists, learning, institutions, and basically everything that a few moments' thought would suggest might be important to technical change is ignored. But this abstraction leaves out the subtleties that make linear models of innovation problematic. Section 1.2 will argue that abstracting knowledge-based processes like innovation to information processing is unacceptable because part of what information means is dependent on a background of tacit knowledge, and this background is embodied.

1.2. Knowledge is embodied

Information based approaches to the output of science ignore the importance of tacit knowledge. This is done
by assuming that the ‘meaning’ of information is somehow contained in it. This is obviously false, as most
technical scientific papers can only be understood by scientists who are well versed in the subject. 3 The tacit

3. For example Stewart (1992), p. x, gives an example of mathematical jargon: "... fascinating cohomology intersection form related to the exceptional Lie algebra E8 ... from a TOP but non-DIFF 4-manifold by surgery on a Kummer surface".

knowledge that enables them to understand science is dependent on the intrinsic biology of the brain. Obviously, embodiment plays a role in knowledge: we can see visible light but not UV, for example. But what is not intuitive is that our sense of similarity is partly innate. This is simply logical: a sense of similarity is a necessary pre-condition for learning to even take place, and therefore cannot be initially learnt (Pinker, 1994, p. 417). "A pigeon that is trained to peck a red circle, will peck a pink circle, or a red ellipse more than a blue square ... this 'stimulus generalisation' happens automatically, without extra training, and it entails an innate 'similarity space'; otherwise that animal would generalise to everything or to nothing. These subjective similarity spacings of stimuli are necessary for learning, so they cannot be learned themselves. Thus, even the behaviourist is 'cheerfully up to his neck' in innate similarity determining mechanisms" (ibid.).
Thus knowledge is dependent on an embodied ability to recognise similarity. Since this implies a brain (and tacit knowledge), information approaches have to invoke 'tacit knowledge' in order to explain it. But, once we invoke tacit knowledge, the 'information processing' drops out of the equation as irrelevant, because the tacit recognition of patterns explains our ability to understand information. 4 Information therefore cannot be
disembodied, because the sense of what it ‘means’ depends somehow on ‘us’. From a tacit knowledge
perspective, science cannot be described without scientists and as will be shown later innovation is dependent on
this conception of similarity. This problem is made worse as knowledge is not only embodied, it is also
embedded in social networks.

1.3. Knowledge is embedded

A consequence of the embodied nature of knowledge is a return to the social. Once we recognise that
knowing what information ‘means’ is embodied, the next step is to realise that the embodied ‘similarity spaces’
that provide context to information change as we learn. 5 These changes can be brought about by experimental trial and error, reading articles and papers, talking to colleagues, going to university, talking in the bar at conferences and a host of other activities. Interpreting this information requires a huge number of interwoven, but possibly contradictory, beliefs about the world, rather than a mental blank slate. Scientists and technologists have an understanding about how the world works, received as part of a learnt tradition that only very slowly divorces itself from the wider traditional beliefs of society (Gould, 1981).
Within science these socialised paradigms are "universally recognised scientific achievements that provide guidance to define scientific puzzles and to provide clues for their solution by a community of practitioners" (Turro, 1986, p. 886). They "provide a vehicle for the definition and solution of scientific puzzles" by providing a "constellation of beliefs, values, techniques and methods that are shared by a community of practitioners" (ibid.). 6 A consequence of this is that scientific knowledge cannot be abstracted successfully from the social contexts that give similarity its meaning. Words in scientific papers are not only words; they are also acts, intended to contribute towards what Oakeshott has called an 'on-going conversation'. 7

4. As Polanyi (1969), p. 44, remarked: "... while tacit knowledge can be possessed by itself, explicit knowledge must rely on being tacitly understood and applied. Hence all knowledge is either tacit or rooted in tacit knowledge".
5. The social mechanisms by which these changes in category take place are described in Lakoff (1987). I am grateful to Prof. Dosi for this reference and to C. Nelson for ideas on the flexibility of similarity spaces.
6. Any problems that turn up with the paradigm are suppressed as anomalies, or accommodated in a 're-articulation' where the shared rules of the paradigm are changed to fit the disparity. The paradigm "will survive until a viable alternative is found ... Ad hoc modifications are usually a more preferable response than tossing out the conventional paradigm without a suitable replacement; in science as in industry, retooling is expensive and disruptive" (ibid., p. 887).
7. Attempts to explain the internal perspective on the 'conversation' from the outside, in terms of information, utility, social forces, language games, etc., are attempts to "use language to get between language and reality" and do not have any epistemological privilege (Nagel, 1997). The embodied nature of knowledge is concerned with the causes of our perceptions rather than the reasons for our beliefs.

Section 2 will outline a simplistic theory of knowledge, science and what technology ‘is’. These three
intellectual building blocks will then be used to show why the linear model does not and cannot work, and then
to provide an alternative Cognitive approach.

2. Building block 1: knowledge as a cognitive process

The first building block explores knowledge, and specifically tacit knowledge. Searle (1995, pp. 130–131) argues that tacit knowledge is vital to our understanding of even very simple words like 'cut' in the sentences 'cut the grass' and 'cut the cake'. The word 'cut' is used in the same way in each sentence, but what counts as cutting varies with context. If one were to run out and 'stab the grass with a large knife', or 'run over a cake with a lawnmower', there is a "real sense in which one did not understand the meaning of the request" (ibid.), even though the word 'cut' is being used in the same way. We know the appropriate meaning because we have tacit background knowledge to compare the words to. 8 Rather than the words themselves carrying meaning, they are related to a tacit background that provides their context.
This tacit background enables us to 'see as', rather than simply 'see', as we actively interpret our experiences rather than passively receive information (Gregory, 1980, 1981, p. 383). As such 'perceptions are hypotheses', because we perceive by relating information from our senses to a learnt background of experience to hypothesise a best fit (Gregory, 1980). This background knowledge acts to actively fill in missing parts of patterns, so that we can understand sentences like "yxx cxn xndxrstxnd whxt x xm wrxtxng xvxn xf x rxplxcx xll thx vxwxls wxth xn 'x' (t gts lttl hrdr f y dn't vn kn whr th vwls r)" (Pinker, 1994, p. 181). The information is not in the distorted sentence, but rather is given to it by the mind.
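The rule behind Pinker's distorted sentence is trivial to state. A short sketch (illustrative, not from the paper) makes the asymmetry clear: removing the vowels is mechanical, while restoring them requires the reader's tacit background knowledge.

```python
def degrade(sentence):
    """Replace every vowel with 'x' -- the transformation behind Pinker's example."""
    return "".join("x" if ch.lower() in "aeiou" else ch for ch in sentence)

print(degrade("you can understand what I am writing"))
# -> yxx cxn xndxrstxnd whxt x xm wrxtxng
```

No comparably simple function recovers the original sentence: the missing information is supplied by the reader's background knowledge, not by the text.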
This tacit background knowledge gives us the capacity to interpret information and comprehend things that
cannot be codified, like how to ride a bicycle. 9 Thus, tacit knowledge is both the backgrounds of interwoven
experience and the automatic capacity we have to relate experience to it. 10 It is hard (if not impossible) to codify and transmit because it is the background to which codified, transmitted information is compared.
As we experience more this background builds up and we learn. Learning is not just getting more
information, as it involves recognising patterns and connections between memories. 11 The more background
knowledge we have the more context we can give to our experiences and with this the larger our capacity to
understand becomes. For example, while someone might know what a 'heart' is, there is a very real sense in which a heart surgeon knows more, because he or she has a larger background knowledge of interwoven experiences to compare new experiences to.

2.1. Knowledge: perceiving and extrapolating patterns

Because biochemical processes in the brain link experiences to 'similar' memories, we can extrapolate linkages and have a contextual understanding of what is likely to happen next; in Edelman's phrase, we live in the 'remembered present'. For example, our memories of dropping glasses on stone floors link up with the
8. This point is clear when the same 'information' can be perceived in two different ways. Thus we can perceive the same Jastrow rabbit/duck Gestalt as either a rabbit or a duck, even though the picture is the same (Wittgenstein, 1953). But a Martian who had no experience of either ducks or rabbits would only see the Gestalt as a line drawing.
9. Tacit knowledge has previously been explored by Fergusson (1766), pp. 50–68, Polanyi (1961, 1962, 1966, 1967, 1969), Oakeshott (1951, 1969, 1975), Wittgenstein (1969), Hayek (1962, 1957, 1967) and more recently Taylor (1989), p. 491, Reber (1989, 1990), Kihlstrom (1987) and Searle (1983, 1990a,b, 1992, 1993, 1995), Chap. 6, but the tradition stretches back to Aristotelian phronesis and the distinction between techne and praxis.
10. At the biological level these two things are the same.
11. For a discussion of the re-emergence of learning theory see Glaser (1990) and Reber (1989, 1990).

memory of the glass smashing (though maybe not all the time). So when someone tells us that they have dropped a glass on a stone floor, part of its meaning is that it is likely to have smashed, because there is a perceivable pattern between the two actions. Thus knowledge is the capacity to act on patterns in our
experiences that we have perceived by relating them to a learnt tacit background. We exercise this capacity
when we properly recognise and extrapolate a pattern. Thus, a mechanic can know what is wrong with a car
engine by listening to the sound it makes. This is possible because over a period of time he has learnt to
recognise a pattern between the sounds that engines make and what is wrong with them. This learnt experience
forms the tacit background knowledge that the mechanic compares a new sound to. Although the mechanic
would never have heard the exact car engine before, he can recognise it by fitting it into a pattern of similar
sounding engines with similar problems.

3. Building block 2: science as pattern

This second building block extends the notion of knowledge as the ability to extrapolate patterns to a very oversimplified and stylised conception of science. 12 Science is then the social practice of exploring and codifying patterns in the behaviour of nature. These patterns are actively explored, rather than passively noted. This recognition and codification of pattern occurs at three levels. Firstly, the patterns and regularities in the behaviour of the natural world. Secondly, the patterns that we tacitly perceive in our mind's eye, both in sense experience and between patterns themselves. 13 Thirdly, the patterns in mathematics that scientists explore and relate to other abstract patterns in our minds, on paper and on computer screens. 14

The key point is that all these levels of pattern are isomorphic (the same shape). Or, to be more precise, they are topologically equivalent: they may not be the same shape, but key features share a one-to-one
12. The concern is to relate conceptions of symmetry and symmetry breaking to perception, scientific laws, mathematics and technical change, rather than to explore the role of science in society.
13. This does not mean a representational theory of perception. For a discussion of the relation between the three levels see Wigner (1960), Kline (1972, 1985), Penrose (1989), Chap. 3, and Barrow (1988, 1991).
14. See Stewart and Tall (1977), Chap. 1, and Skemp (1971) for an excellent discussion of mathematics as pattern recognition.

relationship. For example, there is a similarity between the shape that all balls make when thrown up in the air (i.e., they all produce a parabola); the shape that we 'see' when we imagine what a ball thrown up in the air would look like; and the shape of a mathematical graph describing the dynamics of a ball projected upwards against gravity. All three levels produce the same shape. Scientific knowledge comes from the ability to form an
abstract correspondence between all three levels of pattern. This provides the capacity to explore patterns on one
level and apply those patterns on another level.
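The correspondence between the physical and mathematical levels of the ball example can be sketched in a few lines (an illustrative sketch, not from the paper; the numerical values are assumed): sampled heights of a thrown ball all lie on a parabola, and the mathematical signature of a parabola, constant second differences, recovers the codified law.

```python
g = 9.81    # gravitational acceleration (assumed value), m/s^2
v0 = 10.0   # initial upward velocity (assumed), m/s
dt = 0.1    # sampling interval, s

def height(t):
    """Physical pattern: height of a ball thrown straight up, at time t."""
    return v0 * t - 0.5 * g * t ** 2

heights = [height(i * dt) for i in range(21)]   # the 'observations'

# Mathematical pattern: for any parabola sampled at equal intervals, the
# second differences are constant and equal to -g * dt^2.
second_diffs = [heights[i + 2] - 2 * heights[i + 1] + heights[i]
                for i in range(len(heights) - 2)]
assert all(abs(d + g * dt * dt) < 1e-9 for d in second_diffs)
```

The same constant reappears at every sample, which is the sense in which the observed behaviour, the imagined arc and the graph of the equation are 'the same shape'.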
This ability to perceive patterns reduces the amount of information needed to understand the world and hints at an underlying order that can be abstracted by recognising patterns. Barrow (1988, 1991) describes science using the metaphor of information compression: "The goal of science is to make sense of the diversity of Nature. It is not based upon observation alone. It employs observation to gather information about the world and to test predictions about how the world will react to new circumstances, but between these two procedures lies the heart of the scientific process ... the transformation of lists of observational data into an abbreviated form by the recognition of patterns. The recognition of such a pattern allows the information content of the observed sequence of events to be replaced by a shorthand formula that possesses the same or almost the same information content" (Barrow, 1991, p. 210). 15
The ability to compress information indicates a degree of 'underlying order' (Packel and Traub, 1987; Chaitin, 1990; Barrow, 1995, p. 46). 16 This search for information compression and underlying order is the key point that differentiates science from stamp collecting. Scientists do not just assemble facts; they also find patterns in those facts.
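Barrow's compression metaphor can be made concrete with a toy sketch (illustrative only; the rule and the data are assumed): a long list of observations that follows a pattern collapses into a short formula, and the formula, unlike the raw list, also predicts unobserved cases.

```python
observations = [n * n for n in range(1, 1001)]   # 1000 'observed' data points
rule = "a(n) = n^2"                              # the recognised pattern

raw_size = len(str(observations))   # characters needed to list the raw data
rule_size = len(rule)               # characters needed to state the rule
assert rule_size < raw_size / 100   # the shorthand formula is far shorter

# Unlike the raw list, the rule extrapolates to cases never observed:
assert 1001 * 1001 == 1002001
```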
Turro makes the point well: "The beginning student of organic chemistry is often bewildered by what appears to be an enormous maze of random structural variations and reactions that can be mastered only by tedious memorisation. To the organic chemist, however, the same subject is often a beautifully ordered discipline of elegant simplicity. An important value of learning organic chemistry is the mastering of 'organic thinking', an approach to intellectual processing whereby the 'sameness' of many families of structure and relation is revealed" (Turro, 1986, p. 882).
Turro (1986, p. 882) suggests that intellectual processing involves creating a "stable and self consistent interpretation of a phenomena or event". This is done firstly by fitting data into pre-existing patterns, and secondly by recognising and generating new patterns. When patterns are subjected to tests and pass, they are reinforced; when they fail, they are rearticulated, which may itself entail the further testing of underlying assumptions. 17 Section 3.1 will explore the notion of patterns in a little more detail.

15. Similarly Sykes (1986) starts a text on 'Organic Mechanism' with the following justification: "The chief advantage of the mechanistic approach, to the vast array of disparate information that makes up organic chemistry, is the way in which a relatively small number of guiding principles can be used, not only to explain and interrelate existing facts, but to forecast the outcome of changing the conditions under which already known reactions are carried out, and to foretell the products that may be expected from new ones" (Sykes, 1986, p. 1). Thus patterns can exist as guiding principles that 'explain and interrelate' existing facts, and can be extrapolated to 'forecast' and 'foretell' the future.
16. See Hardy (1928), Feynman (1965), Kline (1972, 1985), Penrose (1974), Dirac (1982). This is a complex process that involves the integration of the 'model' with formal mathematics.
17. In hindsight science may seem to proceed by a logical process of falsification, but the information that causes these falsifications is seen in the context of interwoven cognitive assumptions. For example Barrow (1988), p. 351, quotes the physicists Richard Feynman and Murray Gell-Mann, who noted that their theory did not fit in with experimental evidence about the distribution of electrons and neutrinos in the decay of helium-6. They did not abandon their theory under the weight of this 'falsification' but decided that the mathematical elegance of the theory was more powerful than the experimental evidence. They wrote: "These theoretical arguments seem to the authors to be strong enough to suggest that the disagreement with the He-6 recoil experiments and with some other less accurate experiments indicate that these experiments are wrong" (ibid.). The experiments were later found to be wrong, as experiments sometimes are.

3.1. Knowledge as pattern: the nature of patterns

The idea of knowledge as the capacity to recognise patterns is complicated because the different levels of
pattern can diverge from each other. Thus a mathematical model can deviate from the real world. The two most
common reasons for deviations are symmetry breaking and non-linear error growth. In the real world we do not observe the laws of nature directly; instead we observe their outcomes and abstract back to the laws. For example, we observe a ball falling back to earth and abstract the laws of gravity from that behaviour. In general the symmetries of the laws of nature are conserved between the law and its outcome, but sometimes this
conserved symmetry may be unstable. 18 For example, a law of nature may end up with a pencil balanced on its point, conserving the symmetry between the laws of nature and its outcome. But this end result and its symmetry are unstable, as any deviation (down to quantum fluctuations) will cause the pencil to fall, breaking the symmetry (Barrow, 1988, p. 210; Barrow, 1995, p. 50; Ruelle, 1991, p. 40). The original laws of nature contain no information about the direction in which the pencil fell, and therefore extra information is needed to describe its new position. Symmetry breaking allows the simple laws of nature to interact with one another to produce a complex universe. This complexity then requires extra historical information (beyond any information contained in the fundamental laws of physics) to be fully described. As a consequence scientific patterns cannot be extended indefinitely.
The problem of symmetry breaking is made worse by non-linear error growth, which occurs when small errors in the initial conditions lead to larger errors later on, the so-called butterfly effect (Stewart, 1989; Ruelle, 1991). For example, if one were simulating 'the Laplacian billiard ball universe', every time a ball impacted with another the difference in angle between the real world and the simulation would double. So after only ten impacts it would be impossible to say where on the table the ball would be, even if the centres initially differed by only a micron per second (Ruelle, 1991, p. 42). 19
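Ruelle's doubling argument can be sketched directly (an illustrative calculation with assumed numbers): if each collision doubles the angular error, a microscopic discrepancy between model and table soon exceeds any meaningful angle.

```python
error = 1e-6     # assumed initial angular error (radians), a microscopic offset
impacts = 0
while error < 1.0:   # ~1 radian: the prediction is already meaningless
    error *= 2.0     # non-linear growth: each impact doubles the error
    impacts += 1

print(impacts)   # -> 20: twenty doublings take a micro-radian past a radian
```

Exponential growth, not the size of the initial error, does the damage: shrinking the starting error by a factor of a thousand only buys about ten more collisions.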

3.2. The behaviour of complex phenomena

In simple entities, where there are no interactions, patterns of behaviour can generally be described using differential equations containing the algorithmic structure, the initial conditions, and various constants of nature that are unchanged by the application of the algorithm (Barrow, 1988, p. 279). For more complex phenomena, where more than one set of forces determines behaviour, the outcome is the result of the specific magnitude and manner of the interactions. Recalling the pencil balanced on its tip, the interactions determine the direction it will fall and the magnitude of the fall. Consequently, these complex systems can undergo qualitative changes in
behaviour when conditions change, as the stable balance between opposing forces can slip and new forces can take over. For example, supplying excessive energy to a chemical bond will break it. These qualitative shifts are termed 'phase changes' and include things like water freezing to form ice.
The notion of fundamental forces interacting to produce stable entities, whose behaviour can flip between
different phases, is the starting point for understanding complex technology. Complex natural and artificial
systems can have stable states that act as ‘attractors’, so that if the system is displaced from its low energy state,
the system will adjust until the stable state is reached. For example, a marble in a valley will fall down until it reaches the bottom. The final end position is the dynamic attractor, and all the possible starting conditions that end up in the attractor constitute the 'basin of attraction'. No matter where in the basin the marble starts, it will always end up in the attractor.
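The marble-and-valley picture can be made concrete with a toy numerical sketch (the double-well potential, step size and iteration count are arbitrary assumptions):

```python
# Sketch: on the 1-D "landscape" V(x) = (x**2 - 1)**2 there are two
# stable attractors, x = -1 and x = +1. Overdamped dynamics carry any
# starting point downhill; the sign of the start picks the basin.
def settle(x, steps=10000, dt=0.01):
    for _ in range(steps):
        grad = 4 * x * (x**2 - 1)   # slope dV/dx of the landscape
        x -= dt * grad              # move downhill, like the marble
    return x

print(round(settle(0.3), 3))    # → 1.0  (basin of the x = +1 attractor)
print(round(settle(-2.5), 3))   # → -1.0 (basin of the x = -1 attractor)
```

Every start with the same sign ends at the same attractor, which is exactly the basin-of-attraction idea in the text.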

18. See Carpinelli et al. (1996) for an experimental observation of this process at work. Pierre Curie in 1894 originally proposed that symmetry had to hold. See Stewart and Golubitsky (1992), pp. 8–11.
19. Barrow (1988), p. 277, points out that even if the accuracy is maintained down to Heisenberg's limit—"less than one billion times the size of a single atomic nucleus"—the difference between the mathematics and the real world would be larger than a billiard table after only 15 collisions.

Complex entities can have a series of stable behaviours, described in terms of a series of possible ‘attractors’
in multidimensional phase diagrams. Changes in the environment can cause the system to move into different
attractors, causing qualitative changes in behaviour. For example, a car in neutral would be in a steady state, but when put in gear it would start to move. The range of possible behaviours would be very large, but the car would be limited in its behaviour in several respects; it could not fly, for example. Part of the innovation process involves finding the intrinsic parameters of these areas of behavioural stability and 'tuning' them so that they match design criteria.
The specifics of the parameters that allow these regions of stability to be tuned are not contained in the original laws of nature. They are instead the result of historical symmetry breaking. As a consequence, and it is an important one, knowledge about what these parameters will be cannot be obtained from first principles. It has to be found by trial and error. The simple laws of nature generate symmetrically unstable outcomes that interact with their surroundings to collapse symmetries and generate extra information. The importance of this extra information increases with the complexity of entities, because the number of possible interactions increases faster than the number of objects. 20
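The combinatorial claim is easy to verify: the number of pairwise interactions between n objects is n(n-1)/2, which grows roughly as the square of n.

```python
# Pairwise interactions between n objects: n * (n - 1) / 2.
# The count grows far faster than the number of objects itself.
def pairwise_interactions(n):
    return n * (n - 1) // 2

for n in (10, 100, 1000, 1_000_000):
    print(n, pairwise_interactions(n))
# → 10 45
#   100 4950
#   1000 499500
#   1000000 499999500000
```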
As a consequence of this symmetry breaking, there is a difference between how well science can explain behaviour and how well it can predict it. Science works by compressing information, and symmetry breaking ensures that some information escapes compression, so science cannot explain everything. When science is used to explain, it moves from concrete phenomena to abstracted patterns in their behaviour, leaving behind all the symmetry-breaking information that makes any situation specific. But when science is used for prediction, this extra information is needed, and it is not contained in the original laws.
This difference between explanation and prediction is important in understanding the differences between science and technology. Technology cannot be the extrapolation of scientific patterns into the future because symmetry breaking precludes direct pattern extrapolation. For complex entities it is not possible to reduce their behaviour to the behaviour of the underlying parts.
One consequence of this is an inherent difference between scientific uncertainty and technological uncertainty. When exploring patterns with no symmetry breaking, science can use mathematics to get very accurate predictions. For example, relativity predicts aspects of the behaviour of binary pulsars to one part in 10^14 (Penrose, 1989, p. 198), and quantum electrodynamics (QED) predicts the 'magnetic moment' of an electron as "1.00115965246 . . . whereas the most recent experimental value is 1.001159652193 (with a possible error of about 10 in the last two digits). As Feynman has pointed out, this kind of accuracy could determine the distance between New York and Los Angeles to within the width of a human hair" (Penrose, 1989, p. 199). But with technology, symmetry breaking and non-linear error growth must be taken into account. They in turn depend, very subtly, on both the magnitude and nature of large numbers of interactions, the details of which may not all be held in one mind. As a consequence predictions are hard and bridges still fall down.

4. Building block 3: technology as artificial function

Section 4.1 explores technology, and argues that it is understood in terms of its function, and that this
function can only be understood tacitly. Because innovation involves moving ‘from the given to the possible’ it
requires a conception of the intended possibility. This is done by generating a conception of how a technology
will function and then testing it, with the results feeding back through an iterative learning process leading to

20. Stewart and Cohen (1994), p. 182, point out that there are 45 possible interactions with 10 objects, 4950 with 100, 499,500 with 1000 and 499,999,500,000 with one million. These interactions (and their resulting symmetry breaking) are not described by the original laws of nature, and therefore have to be included historically.

modification of the original design: what Layton (1974), p. 696, calls "the purposive adaptation of means to reach a preconceived end".

4.1. Preconceived ends and assigning functions

Layton's phrase 'preconceived end' contains the seed needed to understand the tacit nature of engineering knowledge. Technology involves transforming the world to fit some preconceived end; adapting means in order to do something; producing entities that fulfil functions. Cars are for driving, drugs are for changing biochemical processes, and screwdrivers exist to exert torque on screws. Here technology is defined as artificial function. This conception of technology is in two parts, as functions are not simple concepts: a cognitive element that relates to our tacit understanding of how the technology 'should' behave, and an element that describes the intrinsic physical properties of a technology that make it behave in a given way.
Searle (1995) has argued that these functions are not intrinsic properties. Thus a computer disk can function to store computer data, or it can function to stop a hot coffee cup marking the table, or it can function to stop a table wobbling when placed under a short leg. Its function in each case is dependent on its relation to other things, rather than being fixed from the inside. 21 This 'purpose' is not intrinsic to the physics, because the physics of the technology has no conception of purpose; it just 'is'. Thus it is possible for objects to function well and to function badly, but this normative component exists only in relation to a tacit understanding of how they should behave. This relational nature is most obvious when technologies are defined by their function even when they fail to perform it. For example, a safety valve is still a safety valve with the function to stop explosions, even if it malfunctions and fails to do so (Searle, 1995, p. 19).
Therefore, to understand a technology, one must understand how it interrelates with other objects. The 'facts of the matter' are not a single 'fact of information', but rather are only understandable as interwoven relationships and patterns of behaviour. Searle (1995), p. 131, gives an example of a simple technology that can be used to show how its functional understanding is tacit. He discusses the sentence "She gave him the key and he opened the door. There is much discussion about whether . . . it is actually said (or merely implied) that he opened the door with that key . . . but it is generally agreed that there is a certain underdetermination of what is said by the literal meaning of the sentence. I wish to say that there is a radical underdetermination of what is said in the literal meaning of the sentence. There is nothing . . . to block the interpretation, He opened the door with her key by bashing the door down with the key; the key weighed two hundred pounds and was in the shape of an axe. Or, he swallowed both the door and key, and he inserted the key in the lock by peristaltic contraction of his gut . . . the only thing that blocks those interpretations is not the semantic content but simply the fact that you have a certain set of abilities for coping with the world, and those abilities are not and could not be included as part of the literal meaning of the sentence." It is only possible to understand how things function if we already have a prior understanding of their relation to the world. We know that the key was not two hundred pounds and in the shape of an axe, because we have previous experience of how keys work and what they are like. Our understanding of function exists in our experience of what functions are. To put it crudely, technology exists in the semantics and not the syntax.
The tacit nature of functional solutions can be compared to the tacit nature of 'problems'. Problems, like functions, are not simple entities. Something being a 'problem' can only be understood by tacitly weighing up the relative importance of interwoven and often contradictory economic, social, aesthetic and political criteria. For some technologies the relative importance of all the interrelated normative components is obvious. Cures for

21. Although being a computer disk is an objective fact, it is only true in relation to other objects. The intrinsic nature of a computer disk stops it being able to function as a parachute or a drug (rather, it could, but it would just function very badly). Its physical nature, which prevents it acting as a parachute, would be the same even if no one were around to say so. But its acting as a coffee coaster is the result of human action.

cancer have an over-riding consideration, to save the patient at almost any cost, and as such can have far worse side effects than simple headache cures. 22 In other cases the technologist must weigh up the nature of the problem in all its interwoven complexity. 23 Often these relative importances are either unknown at the outset, unknowable, or change as the technology evolves. 24 But they are only comprehensible against a tacit background knowledge. Tacit knowledge is therefore needed to understand a problem, and to comprehend its solution.
Our understanding of technology as artificial function is in two parts. The first describes its 'physics' and is intrinsic to it. The second describes its function, that is, what its physics is used for; this depends on human intentionality. Different human agents or social groups can use exactly the same technology for different things, and their understanding of what it is will depend on their tacit knowledge, and therefore on their learnt experiences.

5. The direction argument

With the three building blocks in place, it is now possible to move on and explain why the linear model fails, before attempting to outline an alternative. The reason that science cannot simply be applied to technology, in the way the linear model proposes, is that science and technology are going in opposite directions. Hence the 'direction argument'.
This is easier to understand if one thinks in terms of starting conditions and end results. For example, if one were trying to predict the trajectory of a cannon ball, the starting conditions would be its mass, the energy from the explosive charge, and the direction and elevation of the cannon. Using simple Newtonian mechanics it would then be possible to move from these starting conditions and predict an approximate end result. If the starting conditions are known at time t1, it is possible to extrapolate the scientific pattern that relates them and predict unknown end results at time t2.
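The forward, 'scientific' direction of this example can be written down directly; the point is that no equally direct formula runs the other way (the numbers are illustrative):

```python
import math

# Forward problem: known starting conditions → predicted end result.
# Idealised Newtonian range of a cannon ball, ignoring air resistance.
def shot_range(speed_mps, elevation_deg, g=9.81):
    theta = math.radians(elevation_deg)
    return speed_mps**2 * math.sin(2 * theta) / g

print(round(shot_range(100.0, 45.0), 1))   # → 1019.4 (metres)

# The technological question runs backwards: given a desired range,
# which of the many possible (speed, elevation) pairs should be used?
# The forward formula does not answer that by itself.
```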
This is explicitly not the situation during innovation. At the start of the innovation process one has a rough idea of the end result one wants to achieve, but the actual starting conditions required to produce that end result are unknown, 25 not the other way around. For example, in the pharmaceutical industry the end result would be to stop a disease, but the starting conditions, that is, the chemical makeup of the molecule that will stop it and the mechanism of its action, will be unknown. Technical questions therefore start with a desired effect and want

22. Having said that, powerful painkillers like diamorphine are not given to terminal cancer patients because of a perceived risk that they might be obtained by drug addicts.
23. Vincenti (1990), Chap. 3, p. 51, provides an example of how complex this process can be. Early pilots noticed that planes that were too stable were hard to fly because they would shift out of equilibrium with every gust of wind. Aeronautical engineers therefore had to design a plane with the subjectively correct amount of instability. They had to "translate an ill-defined problem, containing . . . a large subjective human element, into an objective, well-defined problem . . . the engineering community did not know . . . what flying qualities were needed by pilots or how they could be specified". It took many years to understand the relationship between the objective facts of design and the subjective 'gut feelings' of the test pilots. Once this was understood, the engineers could better know their relative importance and use this understanding to propose solutions.
24. For simple technologies that have only local effects, the recognition of what the problem 'is' is a simple matter. For technologies that have wide-ranging, large-scale effects, like a nuclear power station, typical of a Complex Product System (CoPS), the nature of the problem will be different for different political groups in society. In such cases, the definition of what 'the' problem is will depend on political interactions between, for example, environmentalists, the potential workforce, energy consumers, local residents, contractors, the military, etc. Whose view of the problem counts, let alone how it should be solved, is a political question. The problem of defining 'the problem' is made worse because CoPS typically take several years to produce, and during that time the political power of different agents to redefine the problem changes.
25. Often this intended result is very unspecific, and can be very different from the eventual outcome. Allowing the 'intended result' to change introduces too many complexities for an initial treatment of the theory.

to know what will cause it. In essence, science is a one-way street and technology is going in the wrong direction. This argument will become clearer once an alternative is in place, and this alternative makes up the innovation cycle argument.

6. The innovation cycle argument


If the unknown starting conditions, required to match (known) desired end results, cannot be found by back extrapolating scientific patterns, the question arises of how the process is done. The answer can be found by exploring the nature of 'similarity'. 26
When designers and engineers imagine a functional solution to a problem, they start with a given background
of tacit design knowledge. They cannot just apply scientific knowledge or patterns and expect a result to simply
pop out. Instead, the technologists must ‘see’ how the problem they face relates to similar problems they have
faced in the past.
By assuming that similar problems will have similar solutions, the technologist can take the problem with an unknown solution and compare it to a similar problem with a known solution. He can then assume the solutions will be alike. This process can be seen in the diagram. The technical problem is to provide a set of starting conditions that will produce the desired end result, marked X, that would solve some design problem. But this set of starting conditions, Y, cannot be found by back extrapolation of scientific patterns. Instead, the technologist recognises the problem as similar to a previously solved problem, marked X2, for which the solution parameters (i.e., the starting conditions that will produce the desired function) are known. By extrapolating from X to X2 and working back to the known starting conditions Y2, the technologist can then assume that Y will be similar to Y2, and have an uncertain understanding of what the desired solution will be like.
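This X-to-X2 reasoning is, in effect, case-based: retrieve the most similar solved problem and reuse its known starting conditions as a first, uncertain guess. A toy sketch (the problem features and solution labels are entirely hypothetical):

```python
# Toy case-based reasoning sketch: known (problem, solution) pairs are
# stored; a new problem X is matched to its nearest solved neighbour X2,
# and X2's solution Y2 is reused as the uncertain first guess for Y.
solved_cases = {           # problem features → known starting conditions
    (1.0, 0.2): "solution A",
    (0.1, 0.9): "solution B",
    (0.5, 0.5): "solution C",
}

def similar_solution(x):
    def distance(x2):
        return sum((a - b) ** 2 for a, b in zip(x, x2))
    x2 = min(solved_cases, key=distance)   # most similar solved problem
    return solved_cases[x2]                # reuse its solution as a guess

print(similar_solution((0.9, 0.3)))   # → solution A
```

The guess is only a starting point; as the text goes on to argue, it still has to be tested and tuned.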

This process of working backwards by noting similarity is used in two ways. Firstly, to resolve general problems down to specific solvable problems, by a process of recognising similarity within a socialised

26. As noted earlier, similarity is not a simple concept. It is not given in the sense that intrinsic properties are, but is instead dependent on (i) 'context' that is itself dependent on learnt traditions of understanding, and therefore is embedded in learnt social networks, and (ii) the physical makeup of the brain (i.e., it is embodied).

technological tradition. Secondly, once a solvable problem is in place, to take the initial 'similar' solution and analyse and modify it until it has been 'tuned in'. This process of tuning technology will be discussed after the process of resolving general problems into specific problems has been illustrated using an example from the pharmaceutical industry and the Rational Drug Discovery paradigm.

6.1. Resolving problems

Within the Rational Drug Discovery paradigm, the initial problem is the very general one of finding a drug to
stop a given disease. As such, the desired end result is the disease being stopped, and the technology question
involves finding a set of intrinsic starting conditions, in this case a specific molecule, that will stop it.
This cannot be done by applying science. Instead, knowledge based on previous experience of similar problems is used to extrapolate a similar solution. These senses of similarity are part of ongoing traditions of tacit technological understanding. Of interest here are the fundamental design concepts: "These concepts may exist only implicitly in the back of designers' minds, but they must be there. They are givens for the project, even if unstated. They are absorbed—by osmosis, so to speak—by engineers in the course of growing up, perhaps even before entering formal engineering training" (Vincenti, 1990, p. 208).
The first of the fundamental design concepts is the operational principle, the principle behind the way the device works. 27 The problem provides the function that the device must fulfil, and the operational principle defines the basic way in which the machine will perform that function. Vincenti (1990), p. 209, remarks that "the operational principle . . . originates outside the body of scientific knowledge and comes into being to serve some innately technological purpose".
Vincenti's second fundamental design concept is "the normal configuration of the device . . . the general shape and arrangement that are commonly agreed to best embody the operational principle" (ibid., p. 209). For example, "Automobile designers of today usually (but not invariably) assume without much thinking about it that their vehicles should have four (as against three) wheels and a front-mounted, liquid cooled engine" (ibid., p. 210). Normal configurations are again outside science, and evolve as part of a technological tradition. For example, the normal configuration of the Greek temple builders evolved from features that had originally been used to support wooden buildings. Even though a new material meant that this structural form was no longer necessary, the features were kept because they worked. The designer does not need to understand why the choice was made, or the problems with alternative configurations; all he will do is extrapolate it onto the new design [c.f. Turro (1986) on paradigms]. Once a tacit understanding of functionality is extrapolated, the next stage is the iterative process of turning it into reality.
Part of the implicit background tradition of the Rational Drug Discovery paradigm is the assumption that
diseases are caused by biochemical pathways, and that the way to stop disease is to stop disease causing
biochemical mechanisms. As there are a variety of different mechanisms that produce a given disease, tacit gut
feelings, based on learnt experience within the technological tradition, are used to select the best biochemical
mechanism for attack. Once a candidate is found, by very difficult biochemical research, the problem can be
resolved again. Instead of being the general ‘stop disease’ the problem has been resolved and made more
specific to become ‘stop a biochemical mechanism’.

27. The huge complexity can be illustrated by the example from Richards (1994), pp. 379–380, of an operational principle in drug discovery. Because "tumours receive less blood than normal cells, they then receive less oxygen and hence are differentiable physiochemically. What is required is an inhibitor that would block DHFR [an enzyme required for cell division] in tumours but not in normal cells. In principle this is achievable if the drug is capable of existing in an oxidised form which will not inhibit, or as a reduced form which will". Richards sets out the task of finding a compound that will not only bind effectively, be non-toxic and reach the target tumour, but also selectively switch itself on and off depending on where it is. Understanding the feasibility of such a drug requires huge amounts of tacit understanding.

This process continues down until the problem is specified in sufficient detail that a practical solution can be
sought. This can be seen in the diagram:

This is a very stylised account that falsely pretends that the selection of the appropriate mechanisms is an easy task. But it does illustrate the innovation cycle argument at work. Firstly, it clearly shows the resolution of general problems into more and more specific problems. So the innovation task resolves from the very general 'stop a disease' to the far more specific 'find a molecule to match a specific 3D binding chemistry'. This can be seen going down the left-hand side of the table above.
Secondly, the stylised innovation process shows the mechanism by which this happens: a cycle of taking a problem, recognising that it is similar to previously solved problems, and then extrapolating similar solutions. The process of answering the question 'what causes the end result?' is therefore a mixture of recognising the problem as similar and of experimentation to establish that similarity, the result of which resolves the problem to a more specific level.
This resolution of general problems down to specific problems is fairly ubiquitous within technical change (Vincenti, 1990). The initial problem may be 'conceptual and relatively unstructured', but once the problem begins to resolve into more specific sub-problems, the social nature of problem generation becomes closed off as the range of potential solutions is closed, and problems "at the lower levels, where the majority of engineering effort takes place . . . are usually well defined, and activity tends to be highly structured" (Vincenti, 1990, pp. 204–205). By the time one gets down to the 'nuts and bolts' of the problem, the big social and political decisions have already been taken and the problem is almost purely technical. These ideas can be extrapolated to provide a Cognitive model of technical change.

7. A cognitive approach to technical change

A cognitive model of innovation starts with a set of beliefs based on previous design experience, uses them
to recognise similarity between problems, as described above, and then goes on to test the proposed solution,

with the results of the tests being used to modify understanding and produce improved solutions. 28 Going
around the diagram, a few points will be made before looking at how it works in Rational Drug Discovery.

Once the problems are recognised and sub-problems generated, the art and science of design come into play. The art part is 'functional assignment', whereby an inherently uncertain solution to a technological problem is imagined by extrapolating a similar solution from a previously solved similar problem. The secondary purpose is to discover if the proposed technology works. The scientific part, where scientific knowledge of patterns is applied, involves testing the proposed solution to see if it produces the intended end result. Thus, scientific knowledge (of patterns) is not applied directly to produce technology, but indirectly, to help test uncertain functional solutions which have been produced by following technological traditions. Analysis and testing allow designers to understand how changing the starting conditions affects the end result. This knowledge can then be built up and extrapolated (in the direction of the intended result) to 'tune' the technology to produce its intended behaviour. Going around the innovation cycle, this new understanding is used to modify the functional solution, which enters another cycle of testing until a satisfactory solution is found.
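The propose-test-modify cycle can be caricatured in a few lines: a design parameter is repeatedly adjusted by the error between the tested result and the desired end result (the toy 'technology' and the step size are arbitrary assumptions):

```python
# Caricature of the innovation cycle: propose a solution by similarity,
# test it against the desired end result, use the error to modify the
# proposal, and repeat until the design is 'tuned in'.
def innovation_cycle(desired, initial_guess, test, steps=50):
    guess = initial_guess
    for _ in range(steps):
        result = test(guess)          # scientific knowledge: test the proposal
        error = desired - result
        if abs(error) < 1e-6:         # satisfactory solution found
            break
        guess += 0.5 * error          # modify the functional solution
    return guess

# Toy 'technology': the end result is a non-linear function of the design
# parameter, and we seek the starting condition that yields 2.0.
tuned = innovation_cycle(desired=2.0, initial_guess=1.0, test=lambda x: x**2)
print(round(tuned, 4))   # → 1.4142
```

The loop never 'back-extrapolates' the pattern; it only runs the forward test repeatedly, which is the paper's point about the indirect role of science.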
The process is intrinsically uncertain because uncertain patterns of behaviour are being extrapolated into the
unknown. From paper clips to oil platforms, engineers can only know ‘for sure’ about failure after the event.
They must therefore rely on getting as good an understanding as possible and making sure that their design errs
on the side of caution. Petroski Ž1985., p. 44, views technology as hypothesis, when it fails the hypothesis is
disproved. He notes that ‘‘absolute certainty about the failproofness of a design can never be attained, for we
can never be certain that we have been exhaustive in asking questions about its future’’ Žibid...

28. Section 8 will illustrate a simplified cognitive model where innovation takes place in one mind. The model for the more realistic situation of multiple, mutually incompatible tacit conceptions of function has been developed, but introduces too many complexities for an initial treatment.

To perform this analysis, engineers and designers require theoretical tools (Vincenti, 1990, p. 213). These include mathematical theories, scientific laws and phenomenological theories "that are device specific, have little explanatory power or scientific standing . . . Engineers devise them because they must get on with their design job and the phenomena in question are too poorly understood or too difficult to handle otherwise . . . They are used because they work, however imperfectly, and because no better analytical tools are available" (Vincenti, 1990, pp. 214–215). 29
These theoretical tools are complemented by quantitative data, which take many forms, from physical constants to complex reaction rates. As most technology is either mathematically or physically very complex, much of this data has to come from empirical work, pilot plants or prototypes.

8. Applying the model and reducing design space

These processes can be seen at work in the Rational Drug Discovery paradigm discussed earlier. Having
gone around the design cycle and resolved the problem to specifics, as described above, the final resolved
problem involves finding a specific molecule that matches a specific 3D chemistry, so that it will bind to a
protein, disable it and prevent it causing a disease.
As Bradshaw (1995) points out, the job of the medicinal chemist is one of numbers reduction. There are 10^180 possible drugs, 10^18 molecules that are likely to be drug-like, 10^8 compounds that are available in libraries, 10^3 drugs, but only 10^2 profit-making compounds. Drug discovery involves reducing the 'molecular space' in which profitable drugs will be found to a small enough volume that empirical testing can take place. Now, if there are only 10^78 particles in the universe, it is obvious that there cannot be any correspondence-type theory of knowledge at work. There is no way that there can be 10^18 individual brain states corresponding to each molecule, let alone 10^180. Instead, molecules are understood as potentially extrapolatable patterns of features.
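Bradshaw's numbers make the point directly (figures as cited in the text):

```python
# The successive search spaces cited by Bradshaw (1995), as powers of ten.
possible_drugs        = 10**180   # conceivable drug molecules
drug_like             = 10**18    # likely to be drug-like
in_libraries          = 10**8     # available in compound libraries
known_drugs           = 10**3     # actual drugs
profitable            = 10**2     # profit-making compounds
particles_in_universe = 10**78    # the comparison figure cited above

# The full space of possible drugs dwarfs the particle count of the
# universe, so molecule-by-molecule representation is impossible.
print(possible_drugs > particles_in_universe)   # → True
```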
So, when the molecular space is reduced, the chemist can reject all the compounds that are likely to be highly toxic, like dioxins or cyanides, without having to work through all the potential compounds individually. A whole family can be rejected on the basis of shared common properties, in this case having specific chemical groups attached, or on family resemblances based on a more holistic category. 30 So the medicinal chemist will intuitively guess what type of chemical is likely to bind to the protein's active site and test a representative member of the family. If the experiment is a success, the medicinal chemist will look in more detail at a finer level of aggregation within the family. If it is a failure, that failure will be extrapolated to the other members of the family, and the results of the experiment will be used to bias a guess as to the next most likely candidate. The whole process involves going around the design cycle, building up knowledge of the relationship between chemical composition and biological activity.
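The family-by-family elimination just described can be sketched as a loop (the family names and the binds() check are purely hypothetical stand-ins for real compounds and assays):

```python
# Toy screening loop: guess which chemical family is likely to bind,
# test one representative, and on failure reject the whole family.
families = {
    "family_A": ["A1", "A2", "A3"],
    "family_B": ["B1", "B2"],
    "family_C": ["C1", "C2", "C3", "C4"],
}

def binds(compound):
    # stand-in for a real binding assay: pretend only family_C binds
    return compound.startswith("C")

lead_family = None
for name, members in families.items():
    representative = members[0]          # test one member per family
    if binds(representative):
        lead_family = name               # success: refine within this family
        break
    # failure is extrapolated to every member: the family is rejected

print(lead_family)   # → family_C
```

Two failed tests eliminate five compounds here; at realistic scales the same move eliminates vast regions of chemical space per experiment.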
The medicinal chemist uses this built up knowledge to select molecules that are potentially similar to the
desired molecule. This sense of similarity is termed ‘chemical intuition’ and is a form of tacit knowledge that
allows some medicinal chemists to recognise potential drug-like molecules for testing. While a novice might see
a simple molecule, a medicinal chemist with years of experience and built up tacit ‘chemical intuition’ can
recognise the same molecule as more or less drug-like, and therefore as a more or less likely candidate for
testing. Just as the electronic engineer can see components ‘as’ things, this tacit knowledge allows the medicinal
chemist to relate form to function.

29. In support of the functional (and tacit) argument about technology, Ferguson (1992), p. 11, notes that: "The purpose of the engineering sciences, however, is not to record 'laws of nature' but to state relations among measurable properties—length, weight, temperature, velocity, and the like—to permit a technological system to be analysed mathematically". They involve relative relations, not absolute ones.
30. By holistic category, I mean something like 'things you take on a picnic', which, unlike the category 'white things', may not all share a common feature apart from being taken on a picnic.

This closing down of chemical space is repeated until a ‘lead compound’ is found that binds tightly to the
protein. Once a lead is found a process of optimisation is used to find ‘similar’ compounds that are better drugs.
The process involves going around the design cycle of proposing and testing drug candidates so that
understanding of the relationship between form and function is built up. This understanding is then used to
select possible candidates for clinical and animal testing. The medicinal chemist therefore solves his design
problem using a tacit understanding of the relationship between cause and effect Žin this case chemical structure
and biological activity. to recognise what sort of solutions will produce the desired effect. And therefore, what
sort of molecules will not be drugs. This knowledge is then used to eliminate whole families of molecules and
move into an area of ‘chemical space’ where the solution is more likely to be found. Experimental evidence is
used to clarify understanding and resolve the solution space to a small enough sample for empirical tests to find
and optimise a ‘lead’ compound.
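The search procedure described above (close down chemical space with tacit screening criteria, find a lead empirically, then optimise around it) has the structure of a heuristic search. A minimal sketch in Python; every name, property, and scoring rule here is an invented stand-in for illustration, not the paper's method or real medicinal chemistry:

```python
import random

random.seed(0)

# Hypothetical stand-in for a region of 'chemical space': each candidate
# molecule is reduced to two numeric properties.
def make_candidate():
    return {"weight": random.uniform(100, 900),
            "logp": random.uniform(-2, 8)}

def plausible(c):
    # Stand-in for tacit 'drug-likeness' judgement: eliminate whole
    # families of molecules before any expensive empirical test.
    return c["weight"] < 500 and 0 < c["logp"] < 5

def assay(c):
    # Simulated empirical test: binding strength peaks at a property
    # combination unknown to the searcher (best score is 0).
    return -abs(c["weight"] - 350) / 350 - abs(c["logp"] - 2.5) / 2.5

# 1. Screen: close down chemical space with the cheap heuristic.
space = [make_candidate() for _ in range(1000)]
shortlist = [c for c in space if plausible(c)]

# 2. Find a lead: empirically test only the reduced sample.
lead = max(shortlist, key=assay)

# 3. Optimise: go around the design cycle, testing 'similar' compounds.
for _ in range(20):
    variant = {"weight": lead["weight"] + random.uniform(-20, 20),
               "logp": lead["logp"] + random.uniform(-0.3, 0.3)}
    if plausible(variant) and assay(variant) > assay(lead):
        lead = variant

print(f"screened {len(space)} -> {len(shortlist)} candidates; "
      f"lead score {assay(lead):.3f}")
```

The point of the sketch is structural: the expensive test (`assay`) is only ever applied inside the region that the cheap, experience-based filter (`plausible`) has already marked out.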

9. The role of scientific knowledge in technical change

The cognitive framework presented here provides a new way of understanding the role of science in
technology. It argues that scientific knowledge of patterns in nature, although inaccurate, provides an extra route
by which tacit assumptions about behaviour can be tested and modified. Scientific knowledge is not being used
to produce answers, but rather to produce understanding about how technologies work, or more often do not
work. This understanding reduces technical uncertainty and helps reduce the number of experimental dead ends
that are explored.
Scientific knowledge therefore has three roles in technical change. Firstly, it allows patterns of behaviour to
be understood and predicted. This gives technologists an understanding of why a technology behaves as it does,
and therefore of how potential changes to the technology will affect its behaviour.
Secondly, scientific knowledge can be used to screen alternatives before they are tested empirically. A given
technological problem, and in turn its solution, will be bounded by various criteria: a certain size, weight, or
density, etc. Scientific knowledge can be used to perform approximate tests to ensure that potential designs meet
these design criteria.
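This screening role can be illustrated with a toy calculation: approximate checks derived from known relations (here, mass, volume, and their ratio) eliminate candidate designs before any empirical test. All criteria and numbers below are invented for illustration:

```python
# Toy illustration of screening candidate designs against bounding
# criteria before empirical testing. All values are invented.
candidates = [
    {"name": "A", "mass_kg": 120, "volume_m3": 0.8},
    {"name": "B", "mass_kg": 95,  "volume_m3": 1.4},
    {"name": "C", "mass_kg": 80,  "volume_m3": 0.6},
]

MAX_MASS_KG = 100      # design criterion: weight bound
MAX_VOLUME_M3 = 1.0    # design criterion: size bound
MAX_DENSITY = 150      # approximate 'scientific' check: kg per m^3

def passes_screen(c):
    density = c["mass_kg"] / c["volume_m3"]
    return (c["mass_kg"] <= MAX_MASS_KG
            and c["volume_m3"] <= MAX_VOLUME_M3
            and density <= MAX_DENSITY)

survivors = [c["name"] for c in candidates if passes_screen(c)]
print(survivors)  # only the designs worth testing empirically
```

Design A fails the weight bound and B the size bound, so only C survives to be prototyped; the scientific check costs a calculation rather than an experiment.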
Thirdly, scientific knowledge about how the world works can be used to understand how things function.
These functions can then be extrapolated to novel situations where a similar problem needs to be solved. For
example, Edison developed a deep understanding of how air pressure changes can produce signals in the human
ear. He was then in a good position to design the microphone, a technology that operates according to similar
mechanisms.

10. Conclusion

The paper has made two main arguments. Firstly, that scientific knowledge cannot be directly applied to
produce technology because it answers the wrong question. Innovation progresses from a known, desired end
result to find the starting conditions that will produce it, while scientific knowledge, in contrast, can only be
used to move in the opposite direction, from known starting conditions to an unknown end result. Secondly, that
this ‘direction’ problem is overcome by following tacitly understood technological traditions based on embodied
and embedded conceptions of similarity. These technological traditions provide a mechanism that guides
innovation and allow problems that are initially nebulous and very general to be resolved to specific problems
and solved. As a consequence the theory has explained why the linear model fails, because ‘science answers the
wrong question’, why tacit knowledge is so important to innovation, because ‘functions are tacitly understood’,
and why technological change is inherently uncertain, because of symmetry breaking in the pattern of behaviour
of complex technology. The paper provides a theoretical explanation of what was well known empirically, that
while scientific knowledge does not directly produce technology it has a vital indirect role in the innovation
process. 31 The arguments have been verified in a number of innovation processes and used to map out their
functional break-down, some of which are listed in Table 1 below.

Table 1
Pharmaceutical            | Combinatorial Chemistry   | Aerospace                        | Software                            | Chemical Engineering  | Software 2
Basic Biological Research | Basic Biological Research | Project Definition               | Architectural Design                | Major Recycles        | System Objectives + Operational Concepts
Find ‘functional protein’ | Find ‘functional protein’ | Overall Design Layout            | Abstract Spec                       | Process Selection     | Define System Requirements
Find its 3D structure     | Synthesise Population     | Major Component Design           | Interface Design                    | Process Subsections   | Detailed Concept of Operations
Characterise Active Site  | Test Population           | Sub-division of Component Design | Component Design                    | Reactor Vessel Design | Elaboration of Detailed Concept
Optimise Lead Compound    | Optimise Lead Compound    | Further Division                 | Data Structure and Algorithm Design |                       | Further Division
The empirical application of the theory has revealed a number of areas where the stylised version discussed
in this paper is deeply misleading. As a consequence care must be taken in generalising from science based
technologies like pharmaceuticals to engineering based technologies. Chief among the problems was the lack of
realism about the cognitive nature of technical change. The theory presented here is a theory of innovation that
takes place in one person’s head. In the real world innovation takes place among groups of people, in diverse
situations, with divergent experiences and possibly mutually incompatible desires and beliefs.
As scholars like Mintzberg (1994), Leonard-Barton (1995) and Nonaka and Takeuchi (1995) have shown,
collective innovation is not a simple aggregation of individuals. Collective learning involves the development of
shared understandings both within an organisational setting (Leonard-Barton, 1992; Teece et al., 1992;
Dodgson, 1993; Teece and Pisano, 1994; Bowen et al., 1994) and between different organisations (Hobday,
1988; Bessant et al., 1994). This paper highlights the problems involved by stressing the very tacit nature of
knowledge and the way a shared tradition of technological understanding co-evolves with a technological
trajectory. One consequence of this is that the transfer of ‘functional understanding’ within a firm is a non-trivial
problem, especially in technologies where the various sub-systems within the ‘functional breakdown’ interact in
a systemic way, as in major aerospace or software projects (Nightingale, 1997). As Brooks (1995) has argued,
adding people to complex software projects causes rather than cures development problems, as the amount of
communication between individuals grows at a non-linear rate. As a consequence there is a ‘crisis of control’
(Beniger, 1986) within the development process, as increases in scale necessitate various technologies of control
over functionally diverse knowledge bases (Grandstrand et al., 1997). These in turn can act as sources of
institutional inertia or ‘core rigidities’ should the business environment change (Leonard-Barton, 1995).
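Brooks' point can be made concrete. With n people there are n(n-1)/2 pairwise communication channels, so coordination overhead grows quadratically while headcount grows only linearly; a quick illustration (the team sizes are arbitrary):

```python
def channels(n):
    # Pairwise communication channels in a team of n people: n(n-1)/2.
    return n * (n - 1) // 2

for n in (5, 10, 20, 40):
    print(n, channels(n))
# Each doubling of the team roughly quadruples the channels:
# 5 -> 10, 10 -> 45, 20 -> 190, 40 -> 780
```

This is why adding people to a late project can slow it down: the new productive capacity is linear in n, but the added coordination burden is not.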
Secondly, the assumption that problems are fixed and known at the beginning of the innovation process is
extremely weak. Many of the projects studied experienced considerable variation in the ‘problem’ over the
lifetime of the project and on many occasions the political nature of ‘whose problem’ should be solved came
back to haunt the project. For example, the technical problem that had to be solved during the building of the
Channel Tunnel changed fundamentally when legal changes were made to the Health and Safety regulations
while the project was underway. Within the software innovation process, the changing nature of the ‘problem’ that
the innovation process has to solve has its own name, ‘requirements creep’, and a key aspect of successful project
and innovation management is keeping changes within reasonable boundaries (Boehm, 1996). The extent to which
the functionality of a technology changes during its innovation process will have profound effects on
successful innovation management, and care should be taken in applying the model in this paper: it cannot be
assumed that all technologies are the same, or have the same innovation processes and technology management
issues (Pavitt, 1984; Rosenberg, 1994; Hobday, 1998).

31
What is more important for innovation is the ability to build up technological traditions of understanding. Since these traditions are
mostly embedded in firms, the failure of industry to exploit the science base is a problem of technology policy rather than science policy.
While the model is far from perfect, it does allow a conception of the role of science in technical change: a
heuristic understanding of how knowledge is used in innovation. In doing so it complements the various
models of innovation discussed earlier, and both highlights and reclaims the key role of embodied tacit
knowledge in technical change, and hence theoretically grounds one of Dosi's (1988) key features of technological
innovation.

Acknowledgements

This work was carried out within the Complex Product Systems Innovation Centre, supported by the UK
Economic and Social Science Research Council. The author would like to thank Professors Pavitt, Freeman,
Winter and Hobday and two anonymous referees for their comments.

References

Barrow, J.D., 1988. The World within the World. Oxford Univ. Press, Oxford.
Barrow, J.D., 1991. Theories of Everything. Oxford Univ. Press, Oxford.
Barrow, J.D., 1995. Theories of Everything. In: Cornwell, J. (Ed.), Nature’s Imagination. Oxford Univ. Press, Oxford.
Beniger, J., 1986. The Control Revolution. Belknap Harvard.
Bessant, J., Lamming, R., Levy, P., Song, R., 1994. Managing successful total quality relationships in the supply chain. European Journal of
Purchasing and Supply Management 1, 7–18.
Bijker, W.E., Hughes, T.P., Pinch, T. (Eds.), 1987. The Social Construction of Technological Systems: New Directions in the Sociology and
History of Technology. MIT Press, Cambridge, MA.
Boehm, B.W., 1988. A Spiral Model of Software Development and Enhancement. Computer, May, pp. 61–72.
Boehm, B.W., 1996. Anchoring the Software Process. IEEE Software, July, pp. 73–82.
Bowen, H., Clark K., Holloway, C., Leonard-Barton, D., Wheelwright, S., 1994. Regaining the Lead in Manufacturing. Harvard Business
Review, Sept–Oct, pp. 108–144.
Bradshaw, J., 1995. Pitfalls in Creating a Chemically Diverse Compounds Screening Library. Mimeo, Glaxo.
Brady, T., 1997. Software Make or Buy Decisions in the First 40 Years of Business Computing. PhD Thesis Žunpublished., Science Policy
Research Unit, University of Sussex.
Brooks, F., 1995. The Mythical Man–Month: Essays on Software Engineering, Anniversary Edition. Addison Wesley.
Carpinelli, J.M., Weitering, H.H., Plummer, E.W., Stumpf, R., 1996. Direct observation of a surface charge density wave. Nature 381,
398–400.
Chaitin, G.J., 1990. Algorithmic Information Theory, revised third printing. Cambridge Univ. Press, Cambridge.
David, P., 1997. From market magic to calypso science policy. A review of Terence Kealey’s The Economic Laws of Scientific Research.
Research Policy 26, 229–255.
Dirac, P.A.M., 1982. Pretty mathematics. International Journal of Theoretical Physics 21, 603–605.
Dodgson, M., 1993. Technological Collaboration in Industry. Routledge, London.
Dosi, G., 1988. The nature of the innovation process. In: Dosi, G. et al. (Eds.), Technical Change and Economic Theory. Pinter, London.
Fergusson, A., 1766. An Essay on the History of Civil Society. London.
Ferguson, E.S., 1992. Engineering and the Mind’s Eye. MIT Press, Boston.
Feynman, R., 1965. The Character of Physical Law. MIT Press, Boston.
Glaser, R., 1990. The re-emergence of learning theory. American Psychologist 45 (1), 29–39.
Gould, S.J., 1981. The Mismeasure of Man. Penguin.
Grandstrand, O., Patel, P., Pavitt, K., 1997. Multi-technology corporations: why they have ‘Distributed’ rather than ‘Distinctive’ core
competencies. California Management Review 39 (4), 8–25, Summer.

Gregory, R.L., 1980. Perceptions as hypotheses. Philosophical Transactions of the Royal Society of London B 290, 191–197.
Gregory, R.L., 1981. Mind in Science. Penguin Books.
Hardy, G.H., 1928. Mathematical proof. Mind 38, 1.
Hayek, F.A., 1957. The Sensory Order. London.
Hayek, F.A., 1962. Rules, Perception and Intelligibility. Proceedings of the British Academy, XLVIII.
Hayek, F.A., 1967. The Theory of Complex Phenomena. In: Studies in Philosophy Politics and Economics. Chicago Univ. Press.
Hobday, 1988.
Hobday, M., 1998. Product complexity, innovation and industrial organisation. Research Policy 26 (6), 689–710.
Kihlstrom, J.F., 1987. The cognitive unconscious. Science 236, 1445–1451.
Kline, M., 1972. Mathematics in Western Culture. Penguin Books.
Kline, M., 1985. Mathematics and the Search for Knowledge. Oxford Univ. Press, Oxford.
Kline, S., Rosenberg, N., 1986. An Overview of Innovation. In: Landau, R., Rosenberg, N. (Eds.), The Positive Sum Strategy. National
Academy Press, Washington, DC.
Lakoff, G., 1987. Women, Fire, and Dangerous Things. Univ. of Chicago Press, Chicago.
Layton, E.T., 1974. Technology as knowledge. Technology and Culture 15, 31–41.
Leonard-Barton, D., 1992. Core capabilities and core rigidities: a paradox in managing new product development. Strategic Management
Journal 13, 111–125.
Leonard-Barton, D., 1995. Wellsprings of Knowledge: Building and Sustaining the Sources of Innovation. Harvard Business School Press,
Cambridge, MA.
MacKenzie, D., 1990. Inventing Accuracy: A historical sociology of nuclear missile guidance. MIT Press, Cambridge.
Miller, R., Hobday, M., Leroux-Demers, T., Olleros, X., 1995. Innovation in complex systems industries: the case of flight simulation.
Industrial and Corporate Change 4 (2), 363–400.
Mintzberg, H., 1994. The Rise and Fall of Strategic Planning. Free Press, New York.
Nagel, T., 1997. The Last Word. Oxford Univ. Press, Oxford.
Nelson, R.R., Winter, S.G., 1977. In search of a useful theory of innovation. Research Policy 6 (1), 36–76.
Nelson, R.R., Winter, S.G., 1982. An Evolutionary Theory of Economic Change. Harvard Univ. Press, Cambridge, MA.
Nightingale, P., 1997. Knowledge and Technical Change: Computer Simulations and the Changing Innovation Process. Unpublished PhD
Thesis, SPRU University of Sussex.
Nonaka, I., Takeuchi, H., 1995. The Knowledge Creating Company. Oxford Univ. Press, New York.
Oakeshott, M., 1951. Political Education. Inaugural Lecture, London School of Economics.
Oakeshott, M., 1969. Rationalism in Politics. London.
Oakeshott, M., 1975. On Human Conduct. Clarendon Press, Oxford.
Packel, E.W., Traub, J.F., 1987. Information based complexity. Nature 328, 29–33.
Pavitt, K., 1984. Sectorial patterns of technological change: towards a taxonomy and a theory. Research Policy 13, 343–374.
Pavitt, K., 1996. National policies for technical change: where are the increasing returns to economic research?. Proceedings of the National
Academy of Science USA 93, 12693–12700.
Penrose, R., 1974. The role of aesthetics in pure and applied mathematical research. Bulletin of the Institute of Mathematics and its
Applications 10 (78), 266–271.
Penrose, R., 1989. The Emperor’s New Mind. Vintage.
Petroski, H., 1985. To Engineer is Human: The Role of Failure in Successful Design. New York.
Pinker, S., 1994. The Language Instinct. Allen Lane.
Polanyi, M., 1961. Knowing and being. Mind N.S. 70, 458–470.
Polanyi, M., 1962. Personal Knowledge. Chicago.
Polanyi, M., 1966. The creative imagination. Chemical and Engineering News 44 (17), 85–99.
Polanyi, M., 1967. The Tacit Dimension. Kegan Paul, Routledge, London.
Polanyi, M., 1969. Knowing and Being. Routledge, London.
Reber, R., 1989. Implicit learning and tacit knowledge. Journal of Experimental Psychology: General 118, 219–235.
Reber, R., 1990. Implicit Learning and Tacit Knowledge. Oxford Univ. Press.
Richards, W.G., 1994. Molecular modelling: drugs by design. Quarterly Journal of Medicine 87, 379–383.
Rosenberg, N., 1994. Exploring the Black Box: Technology, Economics and History. Cambridge Univ. Press, Cambridge.
Rothwell, R., Freeman, C., Horsley, A., Jervis, V.T.P., Robertson, A.B., Townsend, J., 1974. SAPPHO updated. Research Policy 3 (3),
372–387.
Rothwell, R., 1992. Successful industrial innovation: critical factors for the 1990s. R&D Management 22 (3), 221–239.
Rothwell, R., 1993. Towards the Fifth Generation Innovation Process. SPRU Working Paper.
Ruelle, D., 1991. Chance and Chaos. Princeton Univ. Press.
Searle, J.R., 1983. Intentionality: An essay in the Philosophy of Mind. MIT Press, Cambridge, MA.
Searle, J.R., 1990a. Consciousness, explanatory inversion and cognitive science. Behavioural and Brain Sciences 13, 585–642.

Searle, J.R., 1990b. Who is computing with the brain? Behavioural and Brain Sciences 13, 632–642, Author’s Response.
Searle, J.R., 1992. The Rediscovery of the Mind. MIT Press, Cambridge, MA.
Searle, J.R., 1993. Consciousness, attention and the connection principle. Behavioural and Brain Sciences 16, 198–203.
Searle, J.R., 1995. The Construction of Social Reality. Allen Lane.
Skemp, R.R., 1971. The Psychology of Learning Mathematics. Penguin.
Sykes, P., 1986. A Guidebook to Mechanism in Organic Chemistry, 6th edn. Longman Scientific and Technical.
Stewart, I., 1992. The Problems of Mathematics. Oxford Univ. Press.
Stewart, I., 1989. Does God Play Dice? Basil Blackwell.
Stewart, I., Golubitsky, M., 1992. Fearful Symmetry. Blackwell.
Stewart, I., Tall, D., 1977. The Foundations of Mathematics. Oxford Science Publications.
Stewart, I., Cohen, J., 1994. Why are there simple rules in a complicated universe? Futures 26 (6), 648–664.
Taylor, C., 1989. Sources of the Self. Harvard Univ. Press, Cambridge.
Teece, D., Pisano, G., 1994. The dynamic capabilities of firms: an introduction. Industrial and Corporate Change 3 (3), 537–556.
Teece, D., Pisano, G., Shuen, A., 1992. Dynamic Capabilities and Strategic Management. Mimeo, University of California, Berkeley, CA.
Turro, N.J., 1986. Geometric and topological thinking in organic chemistry. Angew. Chem. Int. Ed. Engl. 25, 882–901.
Vincenti, W.G., 1990. What Engineers Know and How They Know It. Johns Hopkins Univ. Press, Baltimore.
von Hippel, E., 1988. The Sources of Innovation. Oxford Univ. Press, Oxford.
Wigner, E., 1960. The unreasonable effectiveness of mathematics in the natural sciences. Communications on Pure and Applied Mathematics
13, 222–337.
Wittgenstein, L., 1953. Philosophical Investigations. Basil Blackwell, Oxford.
Wittgenstein, L., 1969. On Certainty. Basil Blackwell, Oxford.
