All That Is Solid: Bench-Building at the Frontiers of Two Experimental Sciences

American Sociological Review
2015, Vol. 80(6) 1201-1225
© American Sociological Association 2015
DOI: 10.1177/0003122415607230
http://asr.sagepub.com

David Peterson
Northwestern University

Corresponding Author: David Peterson, Northwestern University, Department of Sociology, 1810 Chicago Avenue, Evanston, IL 60208
E-mail: davidpeterson@u.northwestern.edu

Abstract
The belief that natural sciences are more scientific than the social sciences has been well
documented in the perceptions of both lay and scientific populations. Influenced by
the Kuhnian concept of paradigm development and empirical studies on the closure of
scientific controversies, scholars from divergent traditions associate scientific development
with increased consensus and stability. However, both the macro/quantitative and micro/
qualitative approaches are limited in key ways. This article is the first comparative
ethnography of a natural science (molecular biology) and a social science (psychology) and it
highlights important differences between the fields. Molecular biologists engage in a process
of bench-building, in which they create and integrate new manipulation techniques and
technologies into their practice, whereas psychologists have far less opportunity for this type
of development. This suggests an alternative conception of the natural/social divide, in which
the natural sciences are defined by dynamic material evolution while the social sciences
remain relatively stable.

Keywords
laboratory ethnography, science and knowledge, hierarchy of the sciences, scientific frontier

Social psychology has recently come under unflattering scrutiny after a string of embarrassing episodes. First, a number of social
psychologists have been accused of fabricating
or unethically manipulating data (Carey 2011;
Ferguson 2012; Wade 2010; Yong 2012a).
Second, in what is widely seen as a failure of
the peer review system, a prominent journal
published an article that purported to demonstrate the existence of precognition (Bem 2011;
Wagenmakers et al. 2011). Third, and perhaps
most threatening, a heated debate was ignited
after a failed replication of one of the most
well-cited priming studies in social psychology (Bargh, Chen, and Burrows 1996; Doyen
et al. 2012). This has raised anxieties that the

entire branch of research that uses priming


methods might have deep problems (Kahneman 2012; Satel 2013; Yong 2012b, 2012c).
The natural sciences have had their own
share of scandals. Scientific articles are frequently retracted due to plagiarism, fraud, or
misrepresentations of data, and problems replicating findings are widespread (Begley 2012; Fanelli 2012; Ioannidis 2005; Owens 2011). However, while the technological accomplishments of the physical sciences have allowed such failures to be framed as isolated, exceptional incidents, problems in the social sciences are often treated as providing more evidence that these fields are not wholly legitimate.
The rising prestige of economics has been
the subject of much attention (Callon 1998;
Fourcade, Ollion, and Algan 2014; Knorr
Cetina and Preda 2005), but social psychology is more emblematic of the trajectory and
standing of most social sciences, which—despite gaining footholds in the academy, government, and industry—have rarely
received the prestige afforded the natural sciences. Instead, most social sciences face
ongoing debates regarding their very existence. Within social scientific fields, charges
of "crisis" and "fragmentation" stem from
fears that they lack coherence and unity
(Almond 1989; Cole 2001; Davis 1994; Gardner 1992; Gouldner 1970; Mandler 2011;
Savage and Burrows 2007; Stinchcombe
1994; Vygotsky [1927] 1997). One committee
of prominent social scientists (Wallerstein et al.
1996) diagnosed social science's problem as
an inability to meet the expectations established by the natural sciences of prediction,
management, and quantifiable accuracy.
The apparent lack of advancement in the
social sciences has fostered a widespread
belief in a "hierarchy of the sciences," in which some fields are judged to be more "scientific" than others. Natural sciences like
physics and chemistry sit proudly at the top,
while the social sciences muddle feebly at the
bottom. However, empirical research has produced only mixed results trying to correlate
this intuitive hierarchy with the products or
social organizations of different fields.
Although lay opinion strongly corresponds to
this hierarchy (Lodahl and Gordon 1972;
Smith et al. 2002), it is not clear what, if anything, this intuition actually represents. There
have been some compelling developments,
but no one has been able to explain why some
sciences are more "scientific."

This is because we still lack a basic understanding of how the social sciences work on
the level of practices. Nearly all the work
comparing the natural and social sciences
relies on downstream phenomena like citation
patterns and professional structures as measures of cognitive consensus. Yet, relatively
little work focuses on what social scientists
do in the process of research. Ethnographers
have shed light on the practices of physicists
(Knorr Cetina 1999; Traweek 1992) and biologists (Gilbert and Mulkay 1984; Knorr Cetina 1999; Latour and Woolgar 1979; Lynch
1985), but we know almost nothing about the
practices of social scientists. Our lack of
knowledge surrounding social knowledge
making (Camic, Gross, and Lamont 2011)
has hobbled previous comparisons across the
hard science/soft science divide.
This article bridges the gap between two
research traditions in the social studies of science. On one side is the tradition initiated by
Merton and his students, which uses mainly
macro, quantitative studies to analyze differences in scientific consensus across fields
(Cole 1983, 1992, 1994; Hargens 1988;
Lodahl and Gordon 1972; Zuckerman and
Merton 1971). On the other side is a heterogeneous set of theories and methods typically
unified under the label "constructivist" (Cambrosio and Keating 1988; Pinch and Bijker
1984). These studies use ethnography and
historical case studies to investigate how consensus occurs as a matter of routine practice
(Fujimura 1992; Gilbert and Mulkay 1984;
Latour 1987; Latour and Woolgar 1979;
Lynch 1985). Although these literatures have
made important contributions, they both present incomplete understandings of how the
physical and social sciences differ.
To focus on research practices and make
comparisons across a social and a natural science, I conducted ethnographic studies of five
laboratories in psychology and one in molecular biology. I demonstrate that research practices in molecular biology are organized
around embodied knowledge and research
technologies in ways that are either absent or
greatly constrained in psychology. I argue
that the two fields have different opportunities for creating and integrating new phenomena as part of the ongoing evolution of
manipulation regimes, a process I call bench-building. Bench-building occurs when scientists at the unstable and ambiguous research
frontier concentrate their efforts on the production of reliable effects through an iterative
process whereby they incorporate new techniques and technologies. When successful,
bench-building extends the horizons of
inquiry and produces practical consensus as a
byproduct, as researchers incorporate new
methods to maintain their place at the cutting
edge of their field. In fields in which bench-building is constrained, technological integration and embodied skill remain comparatively
underdeveloped and consensus is produced
through other means. Although not a definitive statement, this article is a contribution
toward understanding how research practices
differ across these areas.

Scientific Maturity and Cognitive Consensus
How scientists achieve consensus is one of
the oldest and most central questions in the
sociology of scientific knowledge (Evans
2007; Shwed and Bearman 2010). It goes to
the heart of what distinguishes scientific
legitimacy from political or religious authority. Early research in the sociology of knowledge argued that scientific consensus was the
result of an objective reading of nature and
was, therefore, immune to social influence.
Thus, the internal practices of science were
outside the purview of sociological explanation (Mannheim [1936] 1985; Merton 1973).
The first sociological attempts to account
for differences in the natural and social sciences utilized Kuhn's (1970) distinction
between paradigmatic and pre-paradigmatic
sciences. Because low-status sciences were
presumed to lack agreement on basic theoretical objects and methods, sociological work in
the 1970s and 1980s attempted to link scientific maturity with some version of cognitive
consensus. Consensus was measured through downstream phenomena like citation patterns
(Cole, Cole, and Dietrich 1978; Hargens
1988; Price 1970; Zuckerman and Merton
1971, 1972). However, although natural sciences are widely perceived to have greater
agreement than social sciences on their fields'
content (Lodahl and Gordon 1972; Smith et al.
2002), initial attempts to make an empirical
link between cognitive consensus and the
hierarchy of sciences were disappointing,
leading Cozzens (1985) to suggest that social
scientists engage in more qualitative work to
understand the differences.
The most significant development in this
tradition found that all scientific fields were
characterized by low levels of consensus at
the research frontier. Even high-status, natural
sciences lacked the consensus to uniformly
evaluate cutting-edge research. Yet, Cole
(1983, 1992, 1994) argues that high- and low-status sciences can still be distinguished based
on the presence or absence of a research
"core" of widely accepted facts. Findings
move from the contested frontier to the uncontested core through an evaluation process in
which scholars consider both the qualities of
the author and the cognitive content of the
contribution (Cole 1992:39-53).
More recently, however, scholars have discovered quantifiable differences between the
social and natural sciences at the research
frontier. Smith and his colleagues (Arsenault,
Smith, and Beauchamp 2006; Best, Smith,
and Stubbs 2001; Smith et al. 2002; Smith
et al. 2000) published several studies using a
novel approach to investigate the varying
"hardness" of sciences. Expanding on Latour's
(1990; Latour and Woolgar 1979) argument
that inscription devices (e.g., graphs, charts,
and images) are a defining feature of modern
science, they hypothesized that sciences perceived to be more scientific would allocate
more journal space to inscriptions. Smith and
colleagues (2000) found that the relationship
between fractional graph area and the intuitive hierarchy was nearly perfect (r = .97, p < .01). They concluded that graphs play a central role in engendering consensus (for additional evidence, see Simonton 2004).

Although recent quantitative studies have
made inroads into understanding both scientific consensus and the differences between
natural and social sciences, they have been
hampered by an inability to investigate actual
research practice. Quantitative studies of consensus focus on differences in the products of
scientific work and can thus say little about
the processes that undergird these differences.
For instance, because Shwed and Bearman's (2010:834) network analysis of emerging scientific consensus explicitly brackets the content of the debates they study, they concede that "[a]ssessing consensus, of course, has nothing to do with the truth." Likewise,
Smith and colleagues (2000:84) recognize that increased graph use in harder sciences "may be an artefact or epiphenomenon of some other general characteristic that distinguishes the hard and soft sciences" and the relationship between graph use and hardness "may not be specific to graph use, but rather reflect another underlying variable."
Because this sort of quantitative research
remains outside the lab walls, there is little
chance of uncovering what this underlying
variable might be. Thus, they cannot distinguish between consensus achieved by, for
instance, domination and consensus achieved
by other means. This analytic strategy renders
sociologists unable to say what distinguishes
consensus in scientific communities from
other high-consensus groups like the military
or religious cults.1
Qualitative researchers in the constructivist tradition have used ethnography and historical case studies to breach the lab walls and
explain how consensus occurs as a matter of
routine practice (Fujimura 1992; Gilbert and
Mulkay 1984; Latour 1987; Latour and Woolgar 1979; Lynch 1985). In constructivist
accounts, a practical (rather than cognitive)
consensus occurs when controversies become
closed. This typically occurs when one side
marshals enough support to make challenging
their facts too costly for competing camps.
Yet, the vast majority of research in this
tradition has been concerned with the natural
sciences, especially physics (Knorr Cetina 1999; Traweek 1992) and biology (Knorr Cetina 1999; Latour and Woolgar 1979;
Lynch 1985; Owen-Smith 2001). The focus
on the natural sciences has resulted in an
inability to explain why some fields face
challenges in achieving closure. In addition to
the relative dearth of research on social
science, Cole (1992) argues that the constructivist reliance on qualitative, micro-level
studies has prevented researchers from
drawing generalizable conclusions.
Although both micro/qualitative and
macro/quantitative studies have produced
important contributions to this discussion,
their limitations indicate a need for meso-level studies that can investigate research
practices while simultaneously making comparisons across fields. The perceived success
of early lab ethnographies led to a decline in
the popularity of the method (Doing 2008;
Lynch 1997), as sociologists of science began
investigating how the laboratory was embedded within larger networks and social worlds
(Callon 1986; Clarke and Star 2008; Fujimura
1988; Heath et al. 1999; Latour 1987, 2005).
However, the lack of ethnographic research
of experimental social science labs, coupled
with the absence of comparative ethnography,
has hampered previous attempts to understand differences in research practice between
the social and natural sciences.
The present research contributes to the
understanding of social science and its relationship to natural science in three ways. First,
this is an ethnography of laboratory practices
in a social science (psychology). Several science studies scholars have investigated knowledge production in economics (Callon 1998;
Fourcade 2009; Knorr Cetina and Preda 2005;
MacKenzie 2006), and a handful of sociological studies are concerned with the discipline
of psychology (Ashmore, Brown, and Macmillan 2005; Ben-David and Collins 1966;
Buchanan 1997; Chamak 1999; Galison 2004;
Hartley, Sotto, and Pennebaker 2002; Rose
1998; Smith et al. 2000; Smyth 2001), yet little research has been done in psychology laboratories. Second, unlike most other lab
ethnographies, which are limited to the observation of individual sites, I observed five
psychology labs representing two major subfields. Third, this project is the first ethnography to compare a social and a natural science.
Previous comparative ethnographic lab studies have illustrated cultural differences in
physics labs in the United States and Japan
(Traweek 1992) or differences in high energy
particle physics and molecular biology labs
(Knorr Cetina 1999). To better understand
how the practices of psychology contrast with
those from the natural sciences, I conducted a
one-year ethnography in a molecular biology
lab. Of course, use of multiple research sites
necessitates a tradeoff: space limitations
require a thinner description than is typical in
ethnography. However, the observation of
multiple sites has several significant advantages, including greater generalizability and
the ability to compare cultures.

Methods
I conducted ethnographies in five psychology
laboratories and one molecular biology lab
from 2009 to 2013. The psychology labs
included three developmental cognition labs
and two social psychology labs located across
three prestigious universities in the United
States, including one at a private university on
the East Coast, two at a private university in
the Midwest, and two at a large public university on the West Coast. I conducted a one-year
ethnography in a molecular biology laboratory
at the West Coast university. Additionally, I
interviewed 52 faculty members, postdoctoral
researchers, and graduate students.
I chose psychology because, unlike other
social scientists, most psychologists conduct
their research in laboratories. This provides
an important benefit to the ethnographer who
finds only silent, solitary work in most social
scientific fields (Camic et al. 2011). Previous
research has investigated social scientific
research through peer review panels (Guetzkow, Lamont, and Mallard 2004; Lamont
2009; Lamont and Huutoniemi 2011), conferences (Gross and Fleming 2011), and retrospective accounts of discovery (Koppman, Cain, and Leahey 2015). While these studies have expanded our knowledge of how social
scientific research is presented and evaluated,
they cannot address questions of in situ
production.
Moreover, psychology represents a good
contrast because the field has modeled itself
on the natural sciences far more aggressively
than have other social sciences (Danziger
1990). Psychologists use experimentation as
their primary method and thus cannot be
accused of lacking empiricism or an experimental method (Collins 1994).
Although I will be mostly contrasting the
two psychological subfields with molecular
biology, social psychology and developmental psychology differ from each other in some
key respects. Specifically, labs for most
social psychologists are little more than social
affiliations and generic spaces where subjects
can fill out paper surveys or participate in
simple behavioral experiments. Labs in developmental psychology, on the other hand, are
more similar to natural science labs, because
they are organized around relatively permanent pieces of experimental technology.
While I address the distinction between these
psychological subfields at a few points, I
focus mainly on what distinguishes psychology labs from molecular biology labs.
The psychology lab is headed by a psychologist who primarily sits in an advisory role.
Postdocs and graduate students are usually the
engines of empirical work. They develop and
execute experiments that further some aspect of
the professor's research program. Many of my
observations come from lab meetings in which
the professor, postdocs, graduate students, and
interested undergraduates would meet for an
hour or two to present and discuss research the
lab was producing. These are highly concentrated scenes of interaction focused on transitioning messy and ambiguous bench results
into polished arguments.
In two of my three developmental psychology sites, I participated fully in lab activities,
including recruiting subjects, setting up
experiments, coding, and running subjects. In
addition to lab meetings, I also attended a

Downloaded from asr.sagepub.com at EM LYON on December 10, 2015

American Sociological Review 80(6)

1206

series of one-on-one weekly meetings where


a faculty member met with individual graduate students and postdocs to discuss the progress of ongoing projects.
For reasons that will become clear, social
psychology runs fewer in-house laboratory
studies. Thus, most of my ethnographic evidence for social psychology comes from
weekly laboratory meetings. However, in
addition to interviews, I toured the offices
that compose the social psychology lab,
watched videos of in-person experiments, and
participated in dozens of online social psychology experiments.
Molecular biology is a good natural science to contrast with psychology because
they have similar laboratory organizations.
Unlike a field such as particle physics, which
has a large, hierarchical social structure
organized around a single piece of technology
(Traweek 1992), molecular biology tends to
be more horizontally structured, with several,
mostly independent projects occurring side-by-side in the same lab. Like psychology,
professors oversee the individualized projects
of graduate students and postdocs.
The molecular biology lab I studied investigated the development of neural circuits in
the retina. This involved dissecting a mouse
pup to extract the retina and then using a
piece of machinery to either measure its electrical output (i.e., neural spikes) or image the
retina to explore morphological and functional aspects of development. Typical manipulations included comparing mice at different
stages of development, comparing normal
mice to genetically altered knockout mice,
using pharmacological agents to manipulate
cell reaction, and using different types of
stimuli (e.g., light or electrical current) to
induce reaction from the retina.
The molecular biology lab was a restrictive environment because much of their
equipment was expensive, fragile, and potentially dangerous, so most of my observations
are from two-hour lab meetings held weekly
for one year. However, I was able to observe
six different researchers during routine lab
work to gain familiarity with their practices.

These lab visits ranged from 90 minutes to four hours.
Both classic and newer ethnographies provide descriptions of biology labs that correspond to my experience (Knorr Cetina 1999;
Latour and Woolgar 1979; Lynch 1988; Owen-Smith 2001; Rosenberger 2011). This article
replicates many findings from this literature.
Instead of attempting to provide a full ethnographic account of the lab, however, I emphasize the practices that differ most strikingly
from what I observed in psychology labs.

Onwards and Upwards or Back to the Bench?
A fundamental distinction between psychological and biological research was highlighted in a pair of lab meetings I attended on
the same day in February 2013. In the first,
Beth, an advanced graduate student in social
psychology, was presenting data from an
experiment designed to investigate the relationship between the feeling of awe and
humble behavior. To do this, she had pairs of
subjects—one of whom either watched a video designed to elicit feelings of awe or a control video—go through three sets of tasks
that took about 10 minutes in total. Each subject filled out a set of surveys both before and
after the experiment. In addition, Beth collected basic physiological data like blood
pressure and heart rate. By the time she presented, she had collected more than eight
hours of video data, survey information, and
basic physiological data from 50 dyads.
Beth's goal for the meeting was stated at the beginning: "What is the construct and how could I measure it?" She had collected
her data but needed clarification on exactly
what humility was and how it could be theoretically untangled from a concept like low
self-esteem, which can look similar behaviorally. Additionally, she was asking for suggestions regarding what to code in the videos.
The lab spent the entire meeting discussing
which aspects of the interaction should be
coded and which should be ignored. Some lab
members advised Beth to code nonverbal aspects of the interaction, including things
like head nods, body posture, and facial
expressions. Others suggested coding for the
use of specific words like "I" and "you" that
may indicate whether the subject was self- or
other-focused. Still others thought Beth
should simply code the entire video to get a
global code for humility.
In the second lab meeting, Isobel, an undergraduate in the molecular biology lab, was
presenting data from a study that compared
the pupillary light reflex (PLR) in genetically
engineered and non-genetically engineered
mice. Although mostly a "wet lab" (i.e., a lab
that conducts research using biological matter,
chemicals, drugs, or other materials requiring
specially designed rooms), undergraduates
were sometimes given "dry" work because it
was easier and would not occupy more expensive and popular equipment. Isobel had run
the experiment several times and brought a
video demonstrating a trial. She turned her
laptop around toward the group and played us
a grainy, 20-second close-up of a mouse reacting to the light stimulus.
After some initial discussion to acquaint
everyone with the purpose of the experiment,
the lab began critiquing Isobel's experimental
procedures. Rose, a graduate student, noted
that the mouse seemed to be moving a bit,
which would make analyzing the video difficult. She asked Isobel how she was restraining
it. Isobel explained that she was just holding it
down with her hand. Rose suggested she use a
baby sock to subdue the mouse. Isobel gratefully accepted this advice after admitting that
the mice were hard to control because they "jump around and nibble." Another graduate
student asked how much light Isobel was
using to stimulate the PLR. After she told him,
he suggested that greater intensity might produce a clearer response. The lab head told
Isobel that she might also achieve better
responses using younger mice and gave her
instructions on how to force open the eyelids
of mouse pups. Isobel wrote down each suggestion and the conversation moved to another
student's project.
The two meetings share some superficial
similarities. On the same day in February, two

researchers presented video data from experiments they had run. Both had fundamental
questions regarding what their videos meant
and both were looking for feedback from
their fellow lab members to improve their
respective studies. However, the two meetings differed in one significant aspect. After
the meeting, Beth, the social psychologist,
moved forward with her study. She used the
suggestions to develop a coding scheme and
began training research assistants to code the
50 videos. Conversely, Isobel, the biologist,
was sent back to the lab bench. She took the
suggestions she received back to the site of
data gathering to improve her experimental
technique and collect better data.
These meetings reveal a key distinction
between the two fields. The molecular biology
lab was organized around the enrichment of
data. Through the development of embodied
knowledge and the introduction of new research
technologies, molecular biologists continually
changed the conditions of their research by
creating and stabilizing new manipulations.
This resulted in an ongoing focus on both
improving embodied skill and introducing new
technologies. The psychologists I observed, on
the other hand, could do little to improve the
conditions of their data gathering and, therefore, benefited less from embodied technique
and cutting-edge technology.
The following sections detail the differences in both technique and technical development between psychology and molecular
biology.

Technique
Collins (1974, 2001), who popularized the
concept of tacit knowledge within science
studies, noted recently that the term has been
used to describe such diverse forms of behavior that there was a pressing need to clarify
the concept. He developed a typology that
included "somatic" and "collective" forms of
tacit knowledge (Collins 2010).
Somatic tacit knowledge is the paradigmatic form described in Polanyi's (1958)
classic example of learning how to ride a
bike. Explicit instruction is insufficient to teach a somatic tacit skill. Simply put, no one
learns to box (Wacquant 2004), play an instrument (Sudnow 1978), or build a TEA laser
(Collins 1974) without prolonged practice
boxing, playing, and building. Certainly,
guidance is helpful, but without physical
engagement, learning will not happen. Crucially, this does not mean these behaviors are
somehow beyond mechanization. Just
because humans cannot learn some skills
through explicit instruction does not mean
that explication is impossible. The nature of
physical actions, even highly skilled behaviors, is essentially mechanical and, therefore,
explicable.
Collective tacit knowledge, on the other
hand, is akin to learning a habitus (Bourdieu
1977). If the paradigmatic example of somatic
tacit knowledge is learning to ride a bicycle,
collective tacit knowledge might be compared to learning to ride a bicycle through a
city. This involves learning the subtle rules of
appropriateness that govern specific contexts.
Bikers may be physically able to ride full-speed through a crowd of pedestrians, but
they will elicit anger if they do so.
Unlike somatic tacit knowledge, which is
based on mechanical action, collective tacit
knowledge is rooted in the fluid dynamics of
collective judgment. This is the elusive form
of tacit knowledge that allows individuals to
be skillful social actors; to be graceful, tasteful, polite, and germane. Collins argues that it
is currently inconceivable to explicate or
mechanize collective tacit knowledge.2
Because research on the development of
scientific paradigms has been concerned with
cognitive consensus and not the practices of
scientists, the significantly different patterns
of development of non-cognitive, tacit knowledge between fields have been overlooked.
Collective tacit knowledge—the skills involved in designing convincing experiments, framing findings, and otherwise learning how to operate successfully within a field—was a concern in both molecular biology and psychology labs. However, the role
of somatic tacit knowledge was far more
pronounced in molecular biology.

Technique in Molecular Biology: Good Hands
Biology graduate students spend their first two
years as traveling journeymen doing "rotations" in which they move from lab to lab to
pick up new skills and develop interests. One of
these students, Ian, entered the lab shortly after
I arrived and his experience was instructive.
The lab meeting before he arrived, Dr. Owens
told us that Ian was coming but warned that his
background was in computational neuroscience. He had never conducted experiments.
She told us, "He hasn't been tested." After the initial meeting with Ian, she felt skeptical: "I don't think he has a realistic idea about what it takes to do experiments." She was proved correct. Ian struggled all semester and was unable
to collect much data because he simply could
not physically perform the experiments.
Later, Ian told me that when he got into the
lab, he had a very brief training session with
Blake, one of the older graduate students:
So, he initially trained me on how to do it. I dissected some mice earlier on another rotation and the very first time I did it he was like, "Oh, alright. It takes a nice touch" and he expected me to just destroy it and I got it out relatively nicely but it was really kind of a sick joke on me because that beginner's luck just didn't persist.

After his initial success he began to have problems. Day after day, Ian continued to go
through the hour-long dissection process only
to discover that the retina was not providing a
viable electrical signal.
Students typically present their rotation
project at the end of their semester in residence. Ian presented the limited data he had
collected but had to admit that "I had a nice response retina thanks to Blake because my touch is not so good." Ian later told me that Blake "took mercy on him" and did several dissections for his project because he simply
could not get a usable recording. Although he
might have eventually learned, Ian was not
asked to join the lab permanently.

Molecular biology students would spend
weeks trying to learn tricky, fine motor methods. One advanced graduate student practiced
to improve her surgical technique. A younger
graduate student spent months improving her
ability to "uncage" cells (a process by which
the neurotransmitter glutamate is precisely
released onto cells using a laser). In addition
to rotations, students would often visit other
labs—sometimes in other states or countries—to learn new techniques.
Having rare or excellent somatic tacit
knowledge was valuable in the biology lab.
One graduate student was known for an ability to capture excellent images. Her name
ended up on many papers that came out of the
lab. A postdoc was chosen specifically
because he promised to get one of their
microscopes to stimulate cells and image at
the same time, which other members of the
lab had tried and failed to do.
The importance of somatic tacit knowledge was evidenced in the way lab members referred to "hands." At one point,
Dr. Owens told the lab that a pair of
researchers would soon be visiting. She
said, "They're both technically outstanding, like, building things. It'll be good to have another pair of hands." In another case, Dr.
Owens was talking with Natalie, a graduate
student, about an undergraduate who had
applied to their PhD program. He had a
neurological disorder that reduced his fine
motor coordination. Previously, he had conducted some research using another student
as a surrogate and thought he could continue using that system in Dr. Owens's lab.
She was dubious.
Dr. Owens: Could you imagine doing work in physiology with someone else being your hands? It's bizarre.
Natalie: Does it work?
Dr. Owens: I'm not quite sure it would work.
Natalie: You could get really mad at your hands, at least.
Dr. Owens: [laughing] I know! I already get mad at my hands but they're my hands. I couldn't imagine yelling at some student, "Get that cell! I can't believe you didn't get that cell!"

He was not invited into their lab.


This focus on "good hands" supports previous research on the role of somatic tacit
knowledge in both physics and biology labs
and suggests that physical skill remains crucial in these fields despite ongoing technological developments. Variously labeled
"good hands" (Shapin 1989), "lab hands" (Doing 2004), and "golden hands" (Fujimura
1988), this line of work highlights the importance of embodied knowledge in diverse scientific contexts. However, the necessity of
technical skill did not transfer to the psychological subfields.

Technique in Developmental Psychology: Warm Bodies
In the molecular biology lab, the term "hands"
was used to denote the high levels of somatic
tacit knowledge necessary to dissect animals,
build equipment, and perform experiments. In
contrast, "hands" is colloquially used as a
synecdoche for generic, unskilled laborers
(e.g., "all hands on deck"). In the developmental psychology labs, this is the role that
many members have. For instance, because
running experiments on infants requires
workers to help schedule subjects, babysit
siblings, and clean up the lab, one psychologist frequently referred to her ongoing need
for "warm bodies" to keep the lab running.
Because of the need for warm bodies, I
was often recruited to work in the developmental psychology labs. Although the physical skills needed to perform these tasks varied,
none could be deemed challenging. For
instance, when one of the coders did not show
up for an experiment, I was enlisted. Dr. Parker
explained my role in about 15 seconds. I was
to look at the monitor and press a button on a
computer keyboard (that someone had helpfully taped a happy face onto) when infants
were looking at the stage and release the button when they were not. That was the totality
of my role.
At another lab, I got the chance to actually
run an experiment. The task involved hiding two balls in a set of buckets and seeing if the
child (this time, a 17-month-old) could infer
which bucket the ball was in. Learning how to
do the experiment was more challenging than
coding but did not involve any special skill.
Although my first trial failed because I did
not follow protocol perfectly, the researcher
told me the second attempt was "perfect." My
physical skill in that domain had plateaued
after less than an hour of practice.
Even the creation of the experimental
equipment is not a particularly specialized
skill. Dr. Parker told me that her husband, a
non-psychologist, had built all of her stages
during a period of unemployment. Many of
the props used for stimuli were purchased
from toy companies. For props that needed to
be specially created, there was a bookshelf
full of crafting items.
Importantly, there was marked variation
between researchers in their skill in handling
children. Some seemed to be uncomfortable
with children and spoke primarily to their
parents. Others were far more at ease and
could quickly establish rapport. In some
ways, this appears to be similar to the differences in dissection skill between Ian and
Blake. However, this is a facile comparison.
While one may improve "being good with kids" by learning various techniques, the researcher's personality was an undeniably
important element. Warmth and extroversion
were especially crucial. More to the point,
however, even researchers who were the most
skilled at dealing with children did not control them or reliably manipulate them. It is
significant that none of the labs in developmental psychology offered any explicit training for working with young children.

Technique in Social Psychology: Scientists without Hands
While the role of technique is reduced in
developmental psychology, it is completely
irrelevant to the majority of social psychology. I asked several social psychologists if
they had any technical skills. A postdoc

whose work has been discussed in the New York Times, PBS NewsHour, and other popular media outlets answered "no" and added, "I don't think I got trained in anything. One thing I got trained to do was to think like a social psychologist and learn how to write papers." Others responded by telling me they
had good organizational or people skills.
Social psychologists rarely get training
meant to increase their somatic tacit knowledge. They learn the habitus of their field, the
collective tacit knowledge that will allow
them to be full members of their community.
They learn to think and write papers like
social psychologists. They develop statistical
skills and some may become excellent at
designing novel experiments. But these skills
do not produce reliable manipulations. Unlike
molecular biology, where technical skill is
vital, or even developmental psychology,
where unskilled bodies are still an important
part of the research apparatus, the research
practice of social psychologists is largely
disembodied.
Perhaps the clearest evidence that social
psychology is characterized by disembodied
practice is its growing use of online experiments (Buhrmester, Kwang, and Gosling
2011; Economist 2012; Mason and Suri 2012;
Paolacci, Chandler, and Ipeirotis 2010).
Using platforms like Amazon.com's Mechanical Turk (MTurk), researchers gain access to a
large, geographically diverse population. This
method is attractive because it is far less
expensive and easier than recruiting live subjects, and it provides a population that is more
representative and thus not limited to the subjects psychologists have traditionally relied on
(Henrich, Heine, and Norenzayan 2010).
However, by embracing this remote source
of data-gathering, social psychologists have
committed themselves to filtering all aspects
of their experiment through the digital
medium of the Internet. They lose all hands-on access to the object of their inquiry.
Greta, a graduate student in social psychology, was running a study on MTurk that
involved subjects reacting to a recording of a laugh. She was unhappy that some subjects
might be listening to the laugh on cheap laptop speakers in a loud room while others may
be listening through expensive headphones,
but she acknowledged there was little she
could do about it. When she presented her
data to the lab four months after our exchange,
she did not mention these concerns.

Technology
Embodied skill is not unique to researchers in
the natural sciences. Ethnographers must
learn to behave normally so as not to disturb
natural group behavior. Quantitative sociologists learn how to navigate statistical computer programs. Outside of academia, many
vocations require workers to learn a sophisticated set of physical skills.
What differentiates embodied knowledge
in molecular biology from these other
domains is that existing somatic tacit knowledge is continually given new power through
technological innovation. In molecular biology, the body serves as an all-purpose tool in
a larger system that regularly incorporates
new research technologies. Skills like dissection and microscope technique provide the
necessary conditions for the introduction of
new technologies.
For instance, the development of genetically engineered mice occurs outside the lab
but has profound implications for a lab's
research, because the mice can be specially
designed to have biological properties that
allow, for instance, for better imaging. Superior imaging allows researchers to witness
cellular interactions that were previously
invisible, and this greater vision, in turn,
gives them reason to try new manipulations
they would have been unable to evaluate
before. However, embodied skill is vital all
along this process. Introduction of these mice
would be impossible if not for the embodied
skill of animal husbandry experts, and genetic
manipulation would be irrelevant if it were
not for the existence of skills that allow
researchers to dissect retinas, maintain cells,
and produce images.3

Technology in Molecular Biology: The Wild West
The material culture in the molecular biology
lab was characterized by an elaborately built
set of rooms containing pieces of research
equipment. These included stations where
animals were anesthetized and decapitated
and others where their retinas could be dissected and stored in chambers that maintained
their viability. Freezers stored dyes that allow
cells to appear fluorescent and viruses that
prevent specific cellular functions. There was
a wall of beakers and a station with a dozen
handheld pipettes that were calibrated to distribute exact quantities of liquid. One station
looked like it belonged in a high school shop
class: on a wall next to a drill press and a band
saw were hundreds of tiny, plastic drawers
containing nuts, bolts, screws, nails, and other
fasteners used to build and manipulate
equipment.
But all of these material technologies are
mere accessories to the actual sites of data
collection in molecular biology—the rigs.
A rig is a colloquial expression used in labs
to refer to a microscope and its accoutrements. The lab had seven working rigs and
they were in the process of building an eighth
from scratch.
Although the rigs differed in their specific
function, they were all designed to capture
data from tissue samples. In every rig, tissue
samples were the focal points for several
coordinated but distinct technologies. One of
the more common setups included optics,
which was composed of the objectives
(lenses), filters designed to allow only certain
wavelengths of light to pass through, and
cameras that captured still pictures and video;
a stimulation system, which elicited responses
from retinal neurons using light from specific
wavelengths; the thin, wire electrode that
touched the cell and recorded its electrical
activity; and a perfusion system that circulated nourishing liquid around the tissue sample to keep cells alive.
Each aspect of the rig is dependent on
additional support technologies. For instance,
the electrode was held in place by a mechanical arm controlled by the manipulator, a
metal box with a digital readout and three
dials controlling the electrode's movement in
X, Y, and Z dimensions. Because the cells are
sensitive to trauma, the manipulator allows
researchers to make movements far more
subtle and exacting than an unaided human
could accomplish. The electrode itself sends
signals to an amplifier that magnifies the
minuscule electrical activity the tissue produces. Additional software then isolates the
signal coming from the cell of interest from
neighboring cells.
The material culture of the molecular biology lab was not only highly elaborated, it was
constantly in development. Besides the rig
that was being built while I was there, another
had been built shortly before I arrived, and
every week they discussed (and frequently
purchased) new dyes, viruses, filters, and
genetically modified mice. When I asked
Sarah about the rapid development of research
technology in their field, she told me, "The questions that we can ask are limited by the ways that we ask them. If we have the latest and greatest technology, we can ask more sophisticated questions." However, this creates a culture of volatility. As Blake explained, "All the tools are advancing at the same time." He later told me that their field pushes hard "even if we don't know where we're going" and compared the constantly receding technological frontier to the Wild West.

Technology in Social and Developmental Psychology: The Settled Lab
Knorr Cetina (1999:35) wrote that the social scientific lab "does not, as a rule, involve a richly elaborated space—a place densely stacked with instruments and materials and populated by researchers. . . . The lab is a virtual space and, in most respects, co-extensive with the experiment." Although both social and
developmental psychology labs were, in general, less elaborated than the biology lab, there
were important differences between the two.

Knorr Cetina's comment can justifiably be applied to social psychology labs. As evidenced by the increasing use of online experiments, much social psychological research is
not confined to a specific space. Moreover,
the desire for more naturalistic data often
leads social psychologists out of the lab. For
instance, one study attempted to elicit pride
from undergraduate subjects by walking them
to and pointing out important landmarks at
their university. Yet, even when social psychological studies do take place within the
lab, little is specific to the lab-space. It is
merely a room where the researcher can set
up some chairs, a table, and a video camera.
Beth's awe experiment, discussed earlier,
asked its two subjects to sit next to each other
and follow instructions on a clipboard while
being video recorded.
Like social psychology labs, psychologists
who do research on toddlers and children
sometimes create a new lab space for each
experiment. The necessary elements of such
an experiment are (1) a subject, (2) an experimenter, (3) a distraction-free space, (4) a toy
that functions as the stimuli, and (5) a video
camera to capture a record of the interaction.
In one experiment, a researcher hid a toy in a
box with 12 drawers to see if the child could
remember the placement after some tasks.
This did not require an elaborated space.
However, with younger children, experimental options are reduced and developmental psychologists tend to use standard
methods. Because infants cannot understand
instructions and are highly distractible, most
cognitive psychology on infants utilizes
methods that rely on measuring the infant's
looking time. Because one of the developmental labs only studied children up to
12-months-old, their two experimental rooms
were designed around permanent pieces of
research equipment—puppet stages with outward-facing cameras built into them. Different experiments altered the action on the
stage, but the stage itself remained unchanged.
Although they inhabited more elaborated
spaces than social psychologists, developmental psychology labs did little evolving.

Overall, the technologies used by psychologists included consumer-grade audio
and visual equipment for recording subject
responses, surveys, online experiments,
stages, and props. These technologies are all
united by the fact that they are "low ceiling" technologies. Getting "good enough" equipment is relatively easy to achieve and inexpensive. In contrast to molecular biology,
where an ongoing integration of cutting-edge
technology provided opportunities for new
types of manipulations, technological
improvements yield little in many psychological fields. The result of this is a much
more stable material culture in the lab. For
instance, psychologists want audio recording
technology that is good enough to clearly
hear responses, but they do not require the
most advanced audio technology. Even
though it is possible to purchase equipment
that can record subtle contours of the subject's voice, that level of detail is unnecessary
for their aims.

Scientific Progress as the Evolution of Manipulation Regimes
The argument that scientific communities are
primarily organized around research technology is not new. As Shapin and Schaffer (1985:152) argue, Boyle's greatest innovation was not his air pump but the air pump's role in reorganizing the social collective around the production of experimental findings, mobilized into "matters of fact" through
collective witness. The air pump may have
been an ill-designed piece of technology, but
it translated a scientific controversy from an
abstract, theoretical disagreement into a concrete problem of technological function.
Later, Collins (1994) would use this argument to explain why some fields, like sociology, had trouble becoming "high-consensus, rapid-discovery science." By integrating "self-generating genealogies of research technologies" (Collins 1994:162), the natural sciences
were able to escape the type of intractable oppositions that characterize the history of philosophy (Collins 1998). The ongoing
introduction of new technologies led some
sciences to develop more dynamic intellectual communities where, instead of languishing in stagnant debates, researchers were
"consistently abandoning old controversies in order to get to new ones" (Collins 1994:160).
However, this answer is incomplete. The
belief that technology creates scientific unity
simply by being an object of collective focus
leads to the same Mertonian overemphasis on
social organization and consensus. For
instance, it is unclear why a debate "soon passes into the realm of consensus" (Collins 1994:161) when a new technology is introduced. If researchers simply abandon old
controversies to chase new technology, what
distinguishes this from the desultory paths of
the social sciences, fields often accused of
being faddish? Conversely, if technology
proves decisive in settling controversies, then
how is this different from any positivist
account of knowledge accumulation in the
natural sciences?
Collins fails to provide a compelling reason why rapid-discovery sciences have been
able to achieve high consensus but the social
sciences have not. On one hand, he concedes
that some areas of social science "simply cannot be technologized, and hence cannot be turned into the rapid-discovery mode" (Collins 1994:174). However, he suggests that
conversation analysis and sociological artificial intelligence are two areas that show
promise in becoming rapid-discovery sciences. Yet, beyond the mere presence of technology, he does not clarify what makes
rapid-discovery science possible in these
areas and not in others.
The ultimate value of embodied knowledge and technology is not in their capacity to
focus attention but their ability to change the
very conditions of data collection. For
instance, Constance, a graduate student in
biology, presented some early findings from a
series of experiments that looked at the role
of ganglion cells during retinal development.
She pulled out a single piece of paper and unfolded it on the table. The lab gathered around a scatter plot with dozens of points representing neural activity (spikes) stimulated by light. She was not sure what these
data meant. First, she was not certain where
her cells were coming from because her sample was taken from a developing mouse and the
retina structure changes early in the lifespan.
Second, she was not sure what the data pattern was telling her. She made circles with her
finger over two parts of the plot and said, "I can kind of see two clusters," but admitted she was not sure if "these" (pointing to spikes earlier in time) turn into "these" (the rough cluster of spikes occurring later) or if the two
were unrelated.
The head of the lab then began asking
questions about the student's microscope
technique. Another graduate student asked if
she could get clearer data using a fluorescent
dye that enhances cell imagery. The discussion ended with Constance agreeing to go
back to the bench to try different microscope
techniques and the new dye to see if she could
get more interpretable data. Like Isobel, the
undergraduate doing a study of PLR, Constance was given suggestions for altering her
embodied practice and introducing new technology and was sent back to the bench to
improve her data.
What separates molecular biology from
psychology is that the development of manipulation regimes is a central goal in the former and a peripheral interest in the latter.4 Molecular biologists are focused on enriching their
data. They are engaged in an iterative process
in which they conduct limited data gathering
followed by discussion and critique that, in
turn, is followed by a trip back to the bench to
change some feature of the experiment (see
Figure 1).5

Figure 1. Iterative and Non-iterative Data Collection
Through repeated iterations, technique,
technology, and object become "interactively stabilized," creating a new "surface of emergence" (Pickering 1995). The conditions of
research have literally been altered. The
development of new skills and, relatedly, new
technologies, changes the shape of this surface and promises new horizons of emergence. The remainder of this article looks at
differences in this process between psychology and molecular biology.

Bench-Building at the Frontier


The scientific bench—the site of data collection—is continually transformed through the application of new embodied techniques and the introduction of new research technologies. I thus label this process bench-building (see Figure 2).

Figure 2. Bench-Building
Bench-building is the ongoing development and refinement of manipulation regimes
that change the conditions of research. Scientists create new ways to make things happen
and produce some predictable effect. As a
manipulation gets refined, it changes the possibilities of research. Rather than being just
an outcome, a stable manipulation can become
integrated into the research apparatus and be
used to produce new outcomes. In this way,
the research frontier is altered.
In contrast to "black boxing," which describes the way "scientific and technical work is made invisible by its own success" (Latour 1999:304), bench-building is characterized by hyper-visibility. Technique and
technology are made into objects of scrutiny
as scientists wrangle with early manipulations
that are often ambiguous and unstable. Many
findings are presented to the lab as mysteries
rather than answers. Unstable but intriguing
outcomes are made the object of scrutiny with
the goal of simply finding some solid ground,
some way to produce the outcome with acceptable regularity. As Cole (1992) would predict,
I found the research frontier in both biology
and psychology characterized by low levels of
consensus. However, molecular biologists
engaged with this ambiguity to a much greater
degree than did the psychologists.
For instance, every six weeks or so, the
molecular biology lab would hold a journal
club where they would review a recent article. The first time I attended one, I was
shocked that they could spend two hours discussing a five-page article. However, this
time investment became understandable as I
came to see how even experienced members
of the lab struggled to understand cutting-edge work from other labs. The following
quotes are all taken from the head of the lab
during my first journal club:

"That distribution doesn't look normal to me. It's a peaked distribution but it doesn't look normal."

"I don't know enough about the colliculus. Are there really no pathways between them? No septum?"

"If they have more information, why show less information?"

"I don't get it."

"The y-axis on [graph] J is still making my head hurt."

"What the hell is this plot?"

"It's all going to depend upon density."

"What's density?"

"They shock them [a specific subtype of retinal cells], which I've never heard of."

This difficulty was not considered a mark


against the paper, which the lab head called
"awesome." The researchers in that article
had been able to do something completely
new and there was no expectation that the
article would be fully comprehensible.
Ambiguity is an unavoidable aspect of a
bench-building process that is based on technical ingenuity and researchers' physical skills.
As Porter (1995:15-16) explains, "In the early life of a new technique, when it is still on the cutting edge, personal contact will most often be crucial for its spread to other laboratories. Indeed, this may be just what cutting edge means in experimental science. But experiments that succeed, again perhaps by definition, will not long remain in the domain of intricate craft skill and personal apprenticeship." Because cutting-edge research pushes into uncharted territories using new techniques that are not well explicated, its results are sources of both inspiration and frustration to their audience. However, only results that remain tethered to particular researchers or labs come to
be seen as deeply dubious. Inexplicability is a

defining characteristic of the research frontier,


but it is only the first stage in the bench-building process.
Unlike most psychologists, molecular
biologists returned to the bench to engage
with this ambiguity because, even in its inexplicability, it pointed toward a new horizon of
control and prediction. If skills can be physically mastered or new technologies introduced to create or stabilize new phenomena,
then there is hope that the process can be
explicated, standardized, and, most ambitiously, mechanized.
Explication of somatic tacit knowledge is
an ongoing concern in molecular biology
labs. I asked Dr. Owens how molecular biologists evaluate work when they struggle simply to understand what was done. She
explained that consensus builds through
between-lab communication. She mentioned
a situation in which her lab had a genetically
engineered mouse that did not produce electrical waves in their lab but did in some others. This sparked a situation whereby the tacit
knowledge embodied in the process was
explicated to standardize the reactions across
labs: "There was a lot of back and forth between the labs saying, 'What were the solutions you used? How long is your dissection? How much light were they exposed to?'" Through this communication, they
changed a number of procedures and were
finally able to replicate the waves.
This is not to say that somatic tacit knowledge is always on a path toward mechanization.
This might actually be somewhat of a rarity.6
Rose's suggestion to use a baby sock to restrain the mouse instead of a researcher's hand is a simple example that shows how a task can be externalized in a piece of, albeit low-tech, technology. The back and forth between Dr. Owens's
lab and the external labs allowed for enough
explication to standardize procedures across
labs. However, for many complex skills, there
is little evidence of impending mechanization.
For instance, although dissection procedures
are mostly standardized across researchers,
there has been no mechanization. Knife-work
remains a vital skill.

Moreover, not all unstable frontiers can be


settled. Returning to the bench does not guarantee success. During a presentation, Julian, a
graduate student, showed some data that
could be explained by two hypotheses. To
decide between them, a neurotransmitter
would have to be completely blocked to see if
it was causing the reaction in question. Yet,
the technical ability had not been developed.
The lab head lamented that the competing
hypotheses could not be tested: "The tools aren't there. The drugs aren't perfect and the [genetically altered mice] aren't perfect."
Without the proper tools, there was simply no
way to improve the data.

Limits to Bench-Building in
Psychological Science
Bench-building is not completely absent in
psychology laboratories. I participated in several informal sessions where researchers
would try out new methods or experiments
on each other to make sure they elicited the
desired response. One advanced graduate
student in cognitive psychology had a hypothesis about word order affecting fluency in a
reading task. He read it to the group to get
their impression. It did not create the effect he
wanted so he decided to develop a better
stimulus. In another case, a graduate student
was demonstrating a memory task for toddlers involving cups with toys in them taped
to a Lazy Susan. However, she told her
adviser that the infants were getting distracted
by the tape and would not engage with the
experiment. The adviser said, "Let me be a baby. How does this work?" They then went
through the experiment together slowly and
developed a number of suggestions, including
putting a more desirable toy in the cups and
replacing the distracting tape with velcro.
However, this sort of bench-building is
severely limited in many psychological fields.
Word order can only be rearranged in a few
ways. There is only so much you can do to
hold a toddler's attention. Repeated trips to
the bench offer diminishing returns. Logical
and clever experimental designs are important


in every field, but only some fields benefit


from repeated iterations of data collection.
The deciding factor is whether engagement
will yield greater purchase on the research
object or not; that is, whether researchers can
either develop their skill or introduce new
technologies to produce better data.
In experimental social sciences like psychology, powerful manipulations are generally rare. The recent debate regarding the
validity of social priming methods is telling
(Kahneman 2012; Yong 2012b, 2012c).
When Doyen and colleagues (2012) failed to
replicate Bargh's highly cited priming study
(Bargh et al. 1996), Bargh dismissed the replication and subsequent news coverage as
products of pay-as-you-go publications and
superficial online science journalism (Bargh
2012). This quickly descended into a squabble
between Bargh, the authors of the replication,
science bloggers, and online commentators.
For our purposes here, it is irrelevant whether
Bargh or his critics are right. An op-ed in the
New York Times about the dust-up points out
that the effect sizes in Bargh's article were not
very large to begin with, and thus failed replications should not be surprising (Satel
2013).
What is significant, however, is that Bargh
defended his work by arguing that the replicators had introduced "critical changes" (Bargh 2012) to the experiment, like slightly different subject instructions and priming procedures. This type of critique against a failed
replication is a familiar strategy (Collins
1985). Naturally, experimental paradigms on
the bleeding edge of science are fragile. At
the research frontier, tacit knowledge is necessary and replications may be difficult to
achieve.
However, the studies in question are not at
the bleeding edge of a research frontier: they
were published almost 20 years ago. Why
have these methods not been refined? Why
have new technologies or methods not provided new vistas and united the lab with other
sciences and outside institutions? What makes
bench-building harder to achieve in the
human sciences?

The answer has both ethical and ontological dimensions. On one hand, the ethics of
experimenting on human subjects limits the
types of strategies social scientists can use.
The development of embodied knowledge in
science involves an intimate relationship
between researchers, tools, and objects of
study, and anything that constrains the evolution of this relationship will constrain bench-building. Certainly, there are good reasons for
this oversight, but it is not surprising that
some of the most famous and illuminating
studies in social science (from the Milgram and Stanford Prison Experiments to Humphreys's Tearoom Trade) were also the most
ethically questionable. The very methods that
tell us the most about human behavior are the
ones that are the most immersive, invasive,
and manipulative.
The effects of these restrictions were especially evident in psychology labs studying
infants and toddlers. Due to legal and ethical
protections, researchers were limited in their
control over subjects. In many experiments,
children were expected to remain seated on
their parent's lap looking at a stage; researchers then directed their attention toward the
stimulus so their faces could be recorded for
coding. However, the children would often
get antsy and begin to stand on their parent's legs or lean out of frame. Because the experimenter's will was mediated through the parent's hands, flawless adherence to protocols
was sacrificed for the comfort of the child.
Researchers had few options to remedy this
so they simply adjusted. They would give the
parent and child breaks in the middle of an
experiment if the subject was getting fidgety.
When the child leaned out of frame, the coders could no longer see the face to measure
looking-time. Yet, they continued coding
based on the position of the body or some
other cue. Because control of the subject was
limited, environmental control was frequently
compromised as well.
However, even if social scientists could
attain better experimental control, they would
still face an ontological challenge: they deal
with objects of study that are abstract and


multivalent. As Peirce (1955:376) explained, "Men who are given to defining too much inevitably run themselves into confusion in dealing with the vague concepts of common sense." For instance, "power" is a phenomenon that occurs between nation-states and also between children on a playground. One instantiation is not purer than the other. Instead, understanding "power" involves the capacity
to draw out commonalities between these
cases. In the physical sciences, on the other
hand, objects of study are usually better understood when reduced to their components.
This results in a confounding research situation for social scientists. The very methods
that bring physical sciences closer to their
object of study pull social scientists farther
away from theirs. Researchers in the Weberian tradition argue that social scientific concepts take their meaning from specific
historical and cultural circumstances (e.g.,
Habermas 1967; Winch 1958). Abstracting a
concept of social relevance from its context
degrades and deforms it.
For instance, one of the infant cognition
labs was running experiments looking at
infants' ability to discriminate between groups
of people. Laura, an advanced graduate student, was investigating whether infants based
their perception of group-ness on similar
appearance or collective action. Clearly, this
work had implications for understanding
things like the development of racial attitudes, an inspiration the researcher acknowledged. However, instead of using images of
actual people (or even realistic representations of people) as measures of difference,
she used cartoon objects made of red circles
and yellow triangles with faces on them:
these cartoons were easy to standardize and
maximally different to make them easier to
distinguish. The initial idea was translated
into a doable project through standardization
and control. But it also became more abstract
and farther removed from the rich concept
that motivated the investigation.
Although challenging research conditions
are not unique to psychology, the ethics of
human subjects protection and the complex

ontological status of cultural and psychological objects constrain bench-building in fields


that take human thought and behavior as their
primary research object. However, it is important not to conflate these two issues. Research
ethics are a product of historically situated
epistemic cultures; they may become more or
less constrictive under different circumstances. Limitations to access or control in
human science can, in principle, be overcome. New methods that collect biological
measures or social media information push
these boundaries. However, it is less obvious
how ontological constraints can be overcome.
Historically and culturally situated concepts
may simply provide an unsound foundation
for the building of stable manipulation
regimes.

Conclusions: Scientific
Growth as Creative
Destruction
Scholars in the philosophy of social science
have provided a justification for social science based on hermeneutics (Habermas 1967,
1968; Taylor 1985) and interpretation (Geertz
1973) in contrast to the objectification used
by natural sciences. The distinction between
"experimental" and "interpretive" fields has been cited publicly by social scientists to argue that their ultimate value and validity emerge from a different source than the natural sciences (e.g., Gieryn 1999).
However, despite this conceptual critique
of treating humans as natural objects, the
experimental method remains the basis for
much of the research in social science (Jackson and Cox 2013). This raises important
questions about whether treating human
beings, who are laden with biography, history, and culture, as natural objects affects
the epistemic culture of the experimental
social sciences. If humans do present particular problems to the experimenter, how does
this change the field at the level of practices?
Ultimately, these questions can only be
addressed through comparative research that


includes both social and natural sciences.


Although this study does not provide a definitive answer, it offers suggestive results that
are supported by previous research.
Through a comparative ethnography of
five psychology labs and one molecular biology lab, I highlighted profound differences in
embodied knowledge and technological integration. Through the development of technique and the integration of new technologies,
molecular biologists are able to create and
stabilize new phenomena and, in doing so,
change the very conditions of their research, a
process I labeled bench-building. Psychologists, on the other hand, are both ethically and
ontologically constrained in bench-building.
This is a different picture of scientific
maturity than previous theories have offered.
Despite being at odds with each other on
many points, macro/quantitative and constructivist theories share a focus on consensus. Both argue that natural sciences achieve
consensus that, in turn, produces more integrated social organizations. Quantitative
research in the Mertonian tradition attributes
this to the cohesiveness of cognitive consensus, whereas the constructivist tradition
attributes it to the closure of controversies.
My observations paint a different picture.
Rather than a picture of high consensus and
cognitive integration, the molecular biology
lab was a scene of constant, sometimes chaotic, change. Where bench-building is a possibility, it is a necessity. When the possibilities
of research are constantly receding, labs funnel their resources into developing the techniques and technology to maintain their
position along the cutting edge. Consensus
and stability are sacrificed under the faith that
ambiguous, local research will eventually
solidify. But, by then, the labs have already
moved to the next frontier. The value of creating new manipulation regimes drives molecular biologists to perpetually transfigure their
working conditions. Bench-building is a type
of Schumpeterian creative destruction that
transforms practice from within.
On the other hand, the two psychological
subfields studied here maintained relatively

unchanging methods, practices, and theories


rather than a flux of competing paradigms
and controversies. A social psychology postdoc told me that theres nothing that I do that
couldnt have been done 60 years ago. This
is not true of all social psychology, of course.
The branch that utilizes implicit-association
tests, for instance, has shown some technological development. However, it would be
hard to imagine any biologist admitting to
this type of epistemic continuity. Because of
the ethical and ontological challenges
described earlier, social psychologists gain
little by pursuing methodological changes
that would only serve to make one's research less intelligible to colleagues. Where bench-building is constrained, the importation of
new techniques and technologies represents a
disintegrating force that may be resisted.
Although psychology is somewhat unusual in the social sciences for its unity around
a set of methods and analytic standards (Danziger 1990; Porter 1995), both economics and
political science have developed high-consensus cultures despite low levels of embodied knowledge and technological integration
(Fourcade et al. 2014; Lamont 2009; Pfeffer
1993). However, fields like anthropology and
sociology remain heterogeneous. Perhaps, in
the absence of evolving manipulation regimes
that coax consensus out of the promise of new
horizons of manipulation, consensus can
occur only through political control. If so,
consensus in social scientific fields represents
a withdrawal from the creative destruction
that defines development in the natural sciences. Under these conditions, consensus
may occur as a product of social control
rather than a byproduct of bench-building.
Future research should expand these investigations beyond the fields examined in this
article. By using molecular biology and two
subfields of psychology as representatives of
natural and social science, I am unable to
definitively answer questions of how bench-building differs in different fields. I chose psychology because it has largely adopted the experimental methodology and laboratory-based social organization common in natural


sciences. However, science studies scholars


have repeatedly demonstrated the diversity of
epistemic and organizational forms among
scientific communities. Future research should
explore further variations within and between
the categories of "natural" and "social." For
instance, Knorr Cetina (1999) argues that high
energy particle physics resembles semiotics in
some ways, and emerging fields like social
neuroscience blur the boundary between the
natural and the social.
One potential avenue of investigation is to
look at how embodied knowledge is distributed in different scientific fields. Vertesi's
(2012) recent work on the NASA team in
charge of the Mars rover suggests that, in
complex technological systems, embodiment
may be distributed horizontally, with different
individuals somatically attuned to different
aspects of the system's function. Furthermore, in hierarchically organized laboratories
(e.g., particle physics), embodied knowledge
tends to be associated with lower status,
whereas theoretical knowledge is dominated
by those with higher status (Doing 2004;
Shapin 1989; Traweek 1992). That these sorts
of divisions of labor have not yet occurred in
molecular biology raises intriguing questions
about the manifestation of bench-building in
different settings.
Another potential area of elucidation is the
micro-interactions involved at different stages
in the bench-building process. For instance,
although the importance of data visualizations
in scientific research is frequently noted (Burri
and Dumit 2008; Latour 1990), there is much to
be learned about the specific role that data visualizations play at different stages in the iterative
loop that characterizes bench-building.
Finally, in addition to the process of bench-building that attempts to standardize and
mechanize new manipulation regimes, it is
important to point out that there may be
other, locally prevalent, incentives to "destabilize and de-standardize" (Jordan and
Lynch 1998:795). Thus, there is a need for
research that looks at how manipulation
regimes are deformed and adapted to local
contexts.

Acknowledgments
I would like to thank Jeremy Freese, Chas Camic, Gary
Fine, and the anonymous reviewers for their detailed and
insightful comments. I also want to thank the lab heads,
post docs, and graduate students who allowed me to
observe and interview them. Finally, I would like to
express gratitude to my informant AF who ensured the
accuracy of my descriptions of molecular biology.

Notes
1. Sociologists advocating theories of paradigm development have repeatedly mistaken cognitive consensus for the vital prerequisite of mature science
rather than a byproduct of research practice. This
inability to understand what separates high-consensus
social science, like economics, from physics has led
to misguided suggestions for making the social sciences more scientific through social control and
socialization (for similar claims, see Best et al.
2001; Fuchs and Turner 1986). Lamont (2009) and
Pfeffer (1993) argue that political science did just
this. The rest of this article highlights why these
sorts of strategies are flawed.
2. In practice, the distinction between the somatic and
collective forms of tacit knowledge may be less
clear than Collins suggests. Even basic somatic
skills may be infused with collective meaning (e.g.,
"throwing like a girl" [Young 2005]). And, although
weaving a bicycle through traffic may require different knowledge than simply learning to ride, the
development of self-driving cars indicates that this
sort of skill is not as insurmountably tacit as Collins
suggests. However, for the purposes of this argument, the basic distinction between somatic and
collective tacit knowledge is a useful heuristic.

3. This entanglement between technology and


embodiment has previously been shown in the
fields of protein crystallography (Myers 2008) and
neuroscience (Alac 2008).
4. The argument that manipulation is essential in scientific explanation has its roots in the philosophy of science (Hacking 1983; Menzies and Price 1993; Reed
2008; Von Wright 1971; Woodward 2005). These
scholars argue that the defining characteristic of science is that researchers intervene in nature and create
stable causal relationships where there were none, and
science primarily progresses with the development of
manipulation technology. In fact, some argue that it
was the emergence of manipulation technology that
enabled molecular biology to transition from a descriptive to an experimental science (Weinberg 1985).
5. Latour's influential account suggests data visualizations ("inscriptions") are primarily useful
as rhetorically powerful devices that facilitate a
swift transition from craft work to ideas (Latour
and Woolgar 1979:69). After this transition, the
bench space will be forgotten (1979:69; see also


Latour 1999:63). This leads Latour (1990) to argue


that inscriptions play the same role in the social and
natural sciences. However, data visualizations in
molecular biology are not always, or even primarily, rhetorical devices. They are also part of a feedback circuit that leads researchers back to the local,
material source of the data to enrich and improve
their manipulations.
6. One famous example of the mechanization of
embodied knowledge is the case of the Matsushita
Electric Company, which sent engineers to learn
and mechanize the subtle, embodied techniques of
master bakers to improve their electric bread maker
(Nonaka 1991; Nonaka and Takeuchi 1995).

References
Alac, Morena. 2008. Working with Brain Scans: Digital Images and Gestural Interaction in fMRI Laboratory. Social Studies of Science 38(4):483508.
Almond, Gabriel A. 1989. A Discipline Divided: Schools
and Sects in Political Science. Newbury Park, CA:
Sage Publications.
Arsenault, Darin J., Laurence D. Smith, and Edith A.
Beauchamp. 2006. Visual Inscriptions in the Scientific Hierarchy: Mapping the Treasures of Science.
Science Communication 27(3):376428.
Ashmore, Malcolm, Steven D. Brown, and Katie Macmillan. 2005. Lost in the Mall with Mesmer and
Wundt: Demarcations and Demonstrations in the
Psychologies. Science, Technology & Human Values
30(1):76110.
Bargh, John A. 2012. Nothing in Their Heads. PsychologyToday.com. Retrieved March 29, 2013
(http://web.archive.org/web/20120309182006/http://
www.psychologytoday.com/blog/the-natural-uncons
cious/201203/nothing-in-their-heads?).
Bargh, John A., Mark Chen, and Lara Burrows. 1996.
Automaticity of Social Behavior: Direct Effects
of Trait Construct and Stereotype Activation on
Action. Journal of Personality and Social Psychology 71(2):23044.
Begley, Sharon. 2012. In Cancer Science, Many Discoveries Don't Hold Up. Reuters.com. Retrieved March
27, 2013 (http://www.reuters.com/article/2012/03/28/
us-science-cancer-idUSBRE82R12P20120328).
Bem, Daryl J. 2011. Feeling the Future: Experimental
Evidence for Anomalous Retroactive Influences on
Cognition and Affect. Journal of Personality and
Social Psychology 100(3):407425.
Ben-David, Joseph, and Randall Collins. 1966. Social
Factors in the Origins of a New Science: The Case
of Psychology. American Sociological Review
31(4):45165.
Best, Lisa A., Laurence D. Smith, and Alan Stubbs. 2001.
Graph Use in Psychology and Other Sciences.
Behavioural Processes 54(13):15565.
Bourdieu, Pierre. 1977. Outline of a Theory of Practice.
New York: Cambridge University Press.

Buchanan, Roderick D. 1997. Ink Blots or Profile Plots:


The Rorschach versus the MMPI as the Right Tool for
a Science-Based Profession. Science, Technology, &
Human Values 22(2):168206.
Buhrmester, Michael, Tracy Kwang, and Samuel D.
Gosling. 2011. Amazon's Mechanical Turk: A New
Source of Inexpensive, Yet High-Quality, Data? Perspectives on Psychological Science 6(1):35.
Burri, Regula V., and Joseph Dumit. 2008. Social Studies of Scientific Imaging and Visualization. Pp.
297317 in The Handbook of Science and Technology
Studies, 3rd ed., edited by E. J. Hackett, O. Amsterdamska, M. Lynch, and J. Wajcman. Cambridge, MA:
The MIT Press
Callon, Michel. 1986. Some Elements of a Sociology
of Translation: Domestication of the Scallops and the
Fishermen of St. Brieuc Bay. Pp. 196223 in Power,
Action, and Belief: A New Sociology of Knowledge?
edited by J. Law. London, UK: Routledge.
Callon, Michel, ed. 1998. Laws of the Markets. Oxford,
UK: Blackwell Publishers.
Cambrosio, Alberto, and Peter Keating. 1988. Going
Monoclonal: Art, Science, and Magic in the Day-to-Day Use of Hybridoma Technology. Social Problems 35(3):24460.
Camic, Charles, Neil Gross, and Michèle Lamont. 2011.
The Study of Social Knowledge Making. Pp. 140
in Social Knowledge in the Making, edited by C.
Camic, N. Gross, and M. Lamont. Chicago: University of Chicago Press.
Carey, Benedict. 2011. Fraud Case Seen as a Red Flag for
Psychology Research. NYTimes.com. Retrieved October 17, 2012 (http://www.nytimes.com/2011/11/03/
health/research/noted-dutch-psychologist-stapelaccused-of-research-fraud.html?_r=0).
Chamak, Brigitte. 1999. The Emergence of Cognitive
Science in France: A Comparison with the USA.
Social Studies of Science 29(5):64384.
Clarke, Adele E., and Susan L. Star. 2008. The Social
Worlds Framework: A Theory/Methods Package.
Pp. 11338 in The Handbook of Science and Technology Studies, 3rd ed., edited by E. J. Hackett, O.
Amsterdamska, M. Lynch, and J. Wajcman. Cambridge, MA: The MIT Press.
Cole, Stephen. 1983. The Hierarchy of the Sciences?
American Journal of Sociology 89(1):11139.
Cole, Stephen. 1992. Making Science: Between Nature and
Society. Cambridge, MA: Harvard University Press.
Cole, Stephen. 1994. Why Sociology Doesn't Make
Progress Like the Natural Sciences. Sociological
Forum 9(2):13354.
Cole, Stephen, ed. 2001. What's Wrong with Sociology?
New Brunswick, NJ: Transaction Publishers.
Cole, Stephen, Jonathan R. Cole, and Lorraine Dietrich.
1978. Measuring the Cognitive State of Scientific
Disciplines. Pp. 209251 in Toward a Metric of Science: The Advent of Science Indicators, edited by Y.
Elkana, J. Lederberg, R. K. Merton, A. Thackray, and
H. A. Zuckerman. New York: Wiley.

Collins, Harry M. 1974. The TEA Set: Tacit Knowledge
and Scientific Networks. Science Studies 4(2):16586.
Collins, Harry M. 1985. Changing Order: Replication
and Induction in Scientific Practice. London, UK:
Sage Publications.
Collins, Harry M. 2001. Tacit Knowledge, Trust and the
Q of Sapphire. Social Studies of Science 31(1):71
85.
Collins, Harry M. 2010. Tacit and Explicit Knowledge.
Chicago: University of Chicago Press.
Collins, Randall. 1994. Why the Social Sciences Wont
Become High-Consensus, Rapid-Discovery Science. Sociological Forum 9(2):15577.
Collins, Randall. 1998. The Sociology of Philosophies:
A Global Theory of Intellectual Change. Cambridge,
MA: Harvard University Press.
Cozzens, Susan E. 1985. Comparing the Sciences: Citation Context Analysis of Papers from Neuropharmacology and the Sociology of Science. Social Studies
of Science 15(1):12753.
Danziger, Kurt. 1990. Constructing the Subject: Historical Origins of Psychological Research. Cambridge,
UK: Cambridge University Press.
Davis, James A. 1994. What's Wrong with Sociology?
Sociological Forum 9(2):17997.
Doing, Park. 2004. Lab Hands and the Scarlet O:
Epistemic Politics and (Scientific) Labor. Social
Studies of Science 34(3):299323.
Doing, Park. 2008. Give Me a Laboratory and I Will
Raise a Discipline: The Past, Present, and Future
Politics of Laboratory Studies in STS. Pp. 27995
in The Handbook of Science and Technology Studies,
3rd ed., edited by E. J. Hackett, O. Amsterdamska, M.
Lynch, and J. Wajcman. Cambridge, MA: The MIT
Press
Doyen, Stéphane, Olivier Klein, Cora-Lise Pichon,
and Axel Cleeremans. 2012. Behavior Priming:
Its All in the Mind, but Whose Mind? PLoS One
7(1):e29081. doi:10.1371/journal.pone.0029081
Economist. 2012. The Roar of the Crowd: Crowdsourcing is Transforming the Science of Psychology. The
Economist. Retrieved February 26, 2013 (http://www
.economist.com/node/21555876).
Evans, John H. 2007. Consensus and Knowledge Production in an Academic Field. Poetics 35(1):121.
Fanelli, Daniele. 2012. Negative Results Are Disappearing from Most Disciplines and Countries. Scientometrics 90(3):891904.
Ferguson, Christopher J. 2012. Can We Trust Psychological Research? Time.com. Retrieved October
17, 2012 (http://ideas.time.com/2012/07/17/can-wetrust-psychological-research/).
Fourcade, Marion. 2009. Economists and Societies: Discipline and Profession in the United States, Britain,
and France, 1890s to 1990s. Princeton, NJ: Princeton
University Press.
Fourcade, Marion, Etienne Ollion, and Yann Algan. 2014.
The Superiority of Economics. MaxPo Discussion
Paper 14(3):126.

Fuchs, Stephan, and Jonathan Turner. 1986. What


Makes a Science Mature? Patterns of Organizational Control in Scientific Production. Sociological
Theory 4(2):14350.
Fujimura, Joan H. 1988. The Molecular Biological Bandwagon in Cancer Research: Where Social
Worlds Meet. Social Problems 35(3):26183.
Fujimura, Joan H. 1992. Crafting Science: Standardized
Packages, Boundary Objects, and Translation. Pp.
168214 in Science as Practice and Culture, edited
by A. Pickering. Chicago: University of Chicago
Press.
Galison, Peter. 2004. Images of the Self. Pp. 25794 in
Things That Talk: Object Lessons from Art and Science, edited by L. Daston. New York: Zone Books.
Gardner, Howard. 1992. Scientific Psychology: Should
We Bury It or Praise It? New Ideas in Psychology
10(2):17990.
Geertz, Clifford. 1973. The Interpretation of Cultures.
New York: Basic Books.
Gieryn, Thomas F. 1999. Cultural Boundaries of Science:
Credibility on the Line. Chicago: University of Chicago Press.
Gilbert, G. Nigel, and Michael J. Mulkay. 1984. Opening
Pandora's Box: A Sociological Analysis of Scientists'
Discourse. Cambridge, UK: Cambridge University
Press.
Gouldner, Alvin W. 1970. The Coming Crisis of Western
Sociology. New York: Basic Books.
Gross, Neil, and Crystal Fleming. 2011. Academic
Conferences and the Making of Philosophical
Knowledge. Pp. 15180 in Social Knowledge in
the Making, edited by C. Camic, N. Gross, and M.
Lamont. Chicago: University of Chicago Press.
Guetzkow, Joshua, Michèle Lamont, and Grégoire Mallard. 2004. What Is Originality in the Humanities
and the Social Sciences. American Sociological
Review 69(2):190212.
Habermas, Jürgen. 1967. On the Logic of the Social Sciences. Cambridge, MA: The MIT Press.
Habermas, Jürgen. 1968. Knowledge and Human Interests. Boston: Beacon Press.
Hacking, Ian. 1983. Representing and Intervening: Introductory Topics in the Philosophy of Natural Science.
New York: Cambridge University Press.
Hargens, Lowell L. 1988. Scholarly Consensus and
Journal Rejection Rates. American Sociological
Review 53(1):13951.
Hartley, James, Eric Sotto, and James Pennebaker. 2002.
Style and Substance in Psychology: Are Influential
Articles More Readable Than Less Influential Ones?
Social Studies of Science 32(2):32134.
Heath, Deborah, Erin Koch, Barbara Ley, and Michael
Montoya. 1999. Nodes and Queries: Linking Locations in Networked Fields of Inquiry. American
Behavioral Scientist 43(3):45063.
Henrich, Joseph, Steven J. Heine, and Ara Norenzayan.
2010. The Weirdest People in the World. Behavioral and Brain Sciences 33(23):6183.


Ioannidis, John P. A. 2005. Why Most Published


Research Findings Are False. PLoS Med 2(8):e124.
doi:10.1371/journal.pmed.0020124
Jackson, Michelle, and D. R. Cox. 2013. The Principles
of Experimental Design and Their Application in
Sociology. Annual Review of Sociology 39(1):2749.
Jordan, Kathleen, and Michael Lynch. 1998. The Dissemination, Standardization and Routinization of a
Molecular Biological Technique. Social Studies of
Science 28(56):773800.
Kahneman, Daniel. 2012. A Proposal to Deal with Questions about Priming Effects. Open letter hosted on
Nature.com. Retrieved March 27, 2013 (http://www
.nature.com/polopoly_fs/7.6716.1349271308!/sup
pinfoFile/KahnemanLetter.pdf).
Knorr Cetina, Karin. 1999. Epistemic Cultures: How the
Sciences Make Knowledge. Cambridge, MA: Harvard
University Press.
Knorr Cetina, Karin, and Alex Preda, eds. 2005. The
Sociology of Financial Markets. Oxford, UK: Oxford
University Press.
Koppman, Sharon, Cindy L. Cain, and Erin Leahey.
2015. The Joy of Science: Disciplinary Diversity
in Emotional Accounts. Science, Technology, &
Human Values 40(1):3070.
Kuhn, Thomas. 1970. The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
Lamont, Michèle. 2009. How Professors Think: Inside
the Curious World of Academic Judgment. Cambridge, MA: Harvard University Press.
Lamont, Michèle, and Katri Huutoniemi. 2011. Comparing Customary Rules of Fairness: Evaluative Practices in Various Types of Peer Review Panels. Pp.
209232 in Social Knowledge in the Making, edited
by C. Camic, N. Gross, and M. Lamont. Chicago:
University of Chicago Press.
Latour, Bruno. 1987. Science in Action: How to Follow
Scientists and Engineers through Society. Cambridge,
MA: Harvard University Press.
Latour, Bruno. 1990. Drawing Things Together. Pp.
1968 in Representation in Scientific Practice, edited
by M. Lynch and S. Woolgar. Cambridge, MA: MIT
Press.
Latour, Bruno. 1999. Pandora's Hope: Essays on the
Reality of Science Studies. Cambridge, MA: Harvard
University Press.
Latour, Bruno. 2005. Reassembling the Social: An
Introduction to Actor-Network Theory. Oxford, UK:
Oxford University Press.
Latour, Bruno, and Steve Woolgar. 1979. Laboratory
Life: The Construction of Scientific Facts. Princeton,
NJ: Princeton University Press.
Lodahl, Janice B., and Gerald Gordon. 1972. The
Structure of Scientific Fields and the Functioning of
University Graduate Departments. American Sociological Review 37(1):5772.
Lynch, Michael. 1985. Art and Artifact in Laboratory
Science: A Study of Shop Work and Shop Talk in a
Research Laboratory. London, UK: Routledge.

Lynch, Michael. 1988. Sacrifice and the Transformation


of the Animal Body into a Scientific Object: Laboratory Culture and Ritual Practice in the Neurosciences. Social Studies of Science 18(2):26589.
Lynch, Michael. 1997. Scientific Practice and Ordinary
Action: Ethnomethodology and Social Studies of Science. Cambridge, UK: Cambridge University Press.
MacKenzie, Donald. 2006. An Engine, Not a Camera:
How Financial Models Shape Markets. Cambridge,
MA: MIT Press.
Mandler, George. 2011. Crisis and the Problems Seen
from Experimental Psychology. Journal of Theoretical and Philosophical Psychology 31(4):24046.
Mannheim, Karl. [1936] 1985. Ideology and Utopia:
An Introduction to the Sociology of Knowledge. San
Diego, CA: Harcourt.
Mason, Winter, and Siddharth Suri. 2012. Conducting Behavioral Research on Amazon's Mechanical
Turk. Behavior Research Methods 44(1):123.
Menzies, Peter, and Huw Price. 1993. Causation as a
Secondary Quality. British Journal for the Philosophy of Science 44(2):187203.
Merton, Robert K. 1973. The Sociology of Science: Theoretical and Empirical Investigations. Chicago: University of Chicago Press.
Myers, Natasha. 2008. Molecular Embodiments and the
Body-Work of Modeling in Protein Crystallography.
Social Studies of Science 38(2):16399.
Nonaka, Ikujiro. 1991. The Knowledge-Creating Company. Harvard Business Review Nov.-Dec.:96-104.
Nonaka, Ikujiro, and Hirotaka Takeuchi. 1995. The
Knowledge Creating Company: How Japanese Companies Create the Dynamics of Innovation. New
York: Oxford University Press.
Owen-Smith, Jason. 2001. Managing Laboratory Work
through Skepticism: Processes of Evaluation and Control. American Sociological Review 66(3):42752.
Owens, Brian. 2011. Reliability of New Drug Target Claims Called into Question. Nature.com.
Retrieved March 27, 2013 (http://blogs.nature.com/
news/2011/09/reliability_of_new_drug_target.html).
Paolacci, Gabriele, Jesse Chandler, and Panagiotis G.
Ipeirotis. 2010. Running Experiments on Amazon
Mechanical Turk. Judgment and Decision Making
5(5):41119.
Peirce, Charles S. 1955. The Philosophical Writings of
Peirce. New York: Dover Publications.
Pfeffer, Jeffrey. 1993. Barriers to the Advance of
Organizational Science: Paradigm Development as
a Dependent Variable. Academy of Management
Review 18(4):599620.
Pickering, Andrew. 1995. The Mangle of Practice: Time,
Agency, and Science. Chicago: University of Chicago
Press.
Pinch, Trevor J., and Wiebe E. Bijker. 1984. The Social
Construction of Facts and Artefacts: Or How the
Sociology of Science and the Sociology of Technology Might Benefit Each Other. Social Studies of Science 14(3):399441.

Polanyi, Michael. 1958. Personal Knowledge: Towards
a Post-Critical Philosophy. Chicago: University of
Chicago Press.
Porter, Theodore M. 1995. Trust in Numbers: The Pursuit
of Objectivity in Science and Public Life. Princeton,
NJ: Princeton University Press.
Price, D. J. de Solla. 1970. Citation Measures of Hard
Science, Soft Science, Technology, and Non-Science. Pp. 322 in Communication among Scientists
and Engineers, edited by C. E. Nelson and D. K. Pollack. Lexington, MA: D. C. Heath and Company.
Reed, Isaac. 2008. Justifying Sociological Knowledge:
From Realism to Interpretation. Sociological Theory
26(2):101129.
Rose, Nikolas. 1998. Inventing Our Selves: Psychology,
Power, and Personhood. Cambridge, UK: Cambridge
University Press.
Rosenberger, Robert. 2011. A Case Study in the
Applied Philosophy of Imaging: The Synaptic Vesicle Debate. Science, Technology, & Human Values
36(1):632.
Satel, Sally L. 2013. Primed for Controversy. New
York Times, February 24, retrieved February 26, 2013
(http://www.nytimes.com/2013/02/24/opinion/sunday/psychology-research-control.html?_r=1&).
Savage, Mike, and Roger Burrows. 2007. The Coming Crisis of Empirical Sociology. Sociology 41(5):88599.
Shapin, Steven. 1989. The Invisible Technician. American Scientist 77(6):55463.
Shapin, Steven, and Simon Schaffer. 1985. Leviathan and the
Air Pump. Princeton, NJ: Princeton University Press.
Shwed, Uri, and Peter S. Bearman. 2010. The Temporal
Structure of Scientific Consensus Formation. American Sociological Review 75(6):81740.
Simonton, Dean K. 2004. Psychology's Status as a Scientific Discipline: Its Empirical Placement within an
Implicit Hierarchy of the Sciences. Review of General Psychology 8(1):5967.
Smith, Laurence D., Lisa A. Best, D. Alan Stubbs, Andrea
B. Archibald, and Roxann Roberson-Nay. 2002.
Constructing Knowledge: The Role of Graphs and
Tables in Hard and Soft Psychology. American Psychologist 57(10):74961.
Smith, Laurence D., Lisa A. Best, D. Alan Stubbs, John
Johnson, and Andrea B. Archibald. 2000. Scientific
Graphs in the Hierarchy of the Sciences: A Latourian
Survey of Inscription Practices. Social Studies of
Science 30(1):7394.
Smyth, Mary M. 2001. Certainty and Uncertainty Sciences: Marking the Boundaries of Psychology in
Introductory Textbooks. Social Studies of Science
31(3):389416.
Stinchcombe, Arthur L. 1994. Disintegrated Disciplines
and the Future of Sociology. Sociological Forum
9(2):27997.
Sudnow, David. 1978. Ways of the Hand: The Organization of Improvised Conduct. Cambridge, MA: Harvard University Press.

Taylor, Charles. 1985. Philosophical Papers, Vol. 2, Philosophy and the Human Sciences. Cambridge, UK:
Cambridge University Press.
Traweek, Sharon. 1992. Beamtimes and Lifetimes: The
World of High Energy Physicists. Cambridge, MA:
Harvard University Press.
Vertesi, Janet. 2012. Seeing Like a Rover: Visualization,
Embodiment and Interaction on the Mars Exploration
Rover Mission. Social Studies of Science 42(3):393
414.
Von Wright, Georg H. 1971. Explanation and
Understanding. London, UK: Routledge and
Kegan Paul.
Vygotsky, Lev S. [1927] 1997. The Historical Meaning of the Crisis in Psychology: A Methodological
Investigation. Pp. 233344 in The Collected Works
of L. S. Vygotsky, Vol. 3, edited by R. W. Rieber and J.
Wollock. New York: Plenum.
Wade, Nicolas. 2010. Harvard Finds Scientist Guilty
of Misconduct. NYTimes.com. Retrieved October
17, 2012 (http://www.nytimes.com/2010/08/21/
education/21harvard.html).
Wagenmakers, Eric-Jan, Ruud Wetzels, Denny Borsboom, and Han L. J. van der Maas. 2011. Why
Psychologists Must Change the Way They Analyze
Their Data: The Case of Psi: Comment on Bem
(2011). Journal of Personality and Social Psychology 100(3):42632.
Wallerstein, Immanuel, Calestous Juma, Evelyn F. Keller,
Jurgen Kocka, Domenique Lecourt, V. Y. Mudkimbe,
Kinhide Miushakoji, Ilya Prigogine, Peter J. Taylor,
and Michel-Rolph Trouillot. 1996. Open the Social
Sciences: Report of the Gulbenkian Commission on
the Restructuring of the Social Sciences. Palo Alto,
CA: Stanford University Press.
Wacquant, Loïc. 2004. Body and Soul: Notebooks of an
Apprentice Boxer. New York: Oxford University
Press.
Weinberg, Robert A. 1985. The Molecules of Life. Scientific American 253(4):4857.
Winch, Peter. 1958. The Idea of a Social Science and Its
Relation to Philosophy. London, UK: Routledge.
Woodward, James. 2005. Making Things Happen: A
Theory of Causal Explanation. New York: Oxford
University Press.
Yong, Ed. 2012a. Replication Studies: Bad Copy.
Nature 485(7398):298300.
Yong, Ed. 2012b. Nobel Laureate Challenges Psychologists to Clean up Their Act. Nature.com. Retrieved
March 27, 2013 (http://www.nature.com/news/nobellaureate-challenges-psychologists-to-clean-up-theiract-1.11535).
Yong, Ed. 2012c. A Failed Replication Draws a Scathing Personal Attack from a Psychology Professor.
Discovermagazine.com. Retrieved March 27, 2013
(http://blogs.discovermagazine.com/notrocketscience/2012/03/10/failed-replication-bargh-psychol
ogy-study-doyen/#.UVMv9JMQZvl).


Young, Iris M. 2005. On Female Body Experience:


Throwing like a Girl and Other Essays. Oxford,
UK: Oxford University Press.
Zuckerman, Harriet A., and Robert K. Merton. 1971.
Patterns of Evaluation in Science: Institutionalisation, Structure and Functions of the Referee System.
Minerva 9(1):66100.
Zuckerman, Harriet A., and Robert K. Merton. 1972.
Age, Aging, and Age Structure in Science. Pp.
292356 in A Sociology of Age Stratification, edited
by M. E. Johnson and A. Foner. New York: Russell
Sage Foundation.

David Peterson is a doctoral candidate at Northwestern


University. His interests include the sociology of science
and knowledge, culture and cognition, and economic
sociology. He has previously published research on the
limits of social constructionism, the dynamics of emergence and reductionism in multilevel health models, and
the use of institutional failures as natural breaching
experiments. His dissertation compares scientific
research practices in the natural and social sciences in
order to better understand long-standing debates regarding the epistemological status of the social sciences.
