Opinion poll

An opinion poll, sometimes simply referred to as a poll, is a survey of public opinion from a particular sample. Opinion polls are usually designed to represent the opinions of a population by conducting a series of questions and then extrapolating generalities in ratio or within confidence intervals.

1 History

The first known example of an opinion poll was a local straw poll conducted by The Harrisburg Pennsylvanian in 1824, showing Andrew Jackson leading John Quincy Adams by 335 votes to 169 in the contest for the United States presidency. Since Jackson won the popular vote in that state and the whole country, such straw votes gradually became more popular, but they remained local, usually city-wide phenomena. In 1916, The Literary Digest embarked on a national survey (partly as a circulation-raising exercise) and correctly predicted Woodrow Wilson's election as president. Mailing out millions of postcards and simply counting the returns, the Digest correctly predicted the victories of Warren Harding in 1920, Calvin Coolidge in 1924, Herbert Hoover in 1928, and Franklin Roosevelt in 1932.

Then, in 1936, its 2.3 million "voters" constituted a huge sample; however, they were generally more affluent Americans who tended to have Republican sympathies. The Literary Digest was ignorant of this new bias. The week before election day, it reported that Alf Landon was far more popular than Roosevelt. At the same time, George Gallup conducted a far smaller, but more scientifically based survey, in which he polled a demographically representative sample. Gallup correctly predicted Roosevelt's landslide victory. The Literary Digest soon went out of business, while polling started to take off.

Elmo Roper was another American pioneer in political forecasting using scientific polls.[1] He predicted the reelection of President Franklin D. Roosevelt three times, in 1936, 1940, and 1944. Louis Harris had been in the field of public opinion since 1947, when he joined the Elmo Roper firm, later becoming a partner.

In September 1938 Jean Stoetzel, after having met Gallup, created IFOP, the Institut Français d'Opinion Publique, as the first European survey institute, in Paris, and started political polls in summer 1939 with the question "Why die for Danzig?", looking for popular support or dissent with this question asked by appeasement politician and future collaborationist Marcel Déat.

Gallup launched a subsidiary in the United Kingdom that, almost alone, correctly predicted Labour's victory in the 1945 general election, unlike virtually all other commentators, who expected a victory for the Conservative Party, led by Winston Churchill.

The Allied occupation powers helped to create survey institutes in all of the Western occupation zones of Germany in 1947 and 1948 to better steer denazification.

By the 1950s, various types of polling had spread to most democracies.

Recently, statistical learning methods have been proposed to exploit social media content (such as posts on the micro-blogging platform Twitter) for modelling and predicting voting intention polls.[2][3]

2 Sample and polling methods

Opinion polls for many years were maintained through telecommunications or in person-to-person contact. Methods and techniques vary, though they are widely accepted in most areas. Verbal, ballot, and processed types can be conducted efficiently, contrasted with other types of surveys, systematics, and complicated matrices beyond previous orthodox procedures.

Opinion polling developed into popular applications through popular thought, although response rates for some surveys declined. The following has also led to differentiating results:[1] Some polling organizations, such as Angus Reid Public Opinion, YouGov and Zogby, use Internet surveys, where a sample is drawn from a large panel of volunteers and the results are weighted to reflect the demographics of the population of interest. In contrast, popular web polls draw on whoever wishes to participate rather than a scientific sample of the population, and are therefore not generally considered professional.

Polls can be used in the public relations field as well. In the early 1920s, public relations experts described their work as a two-way street. Their job would be to present the misinterpreted interests of large institutions to the public. They would also gauge the typically ignored interests of the public through polls.

2.1 Benchmark polls

A benchmark poll is generally the first poll taken in a campaign. It is often taken before a candidate announces their bid for office, but sometimes it happens immediately following that announcement, after they have had some opportunity to raise funds. This is generally a short and simple survey of likely voters.

A benchmark poll serves a number of purposes for a campaign, whether it is a political campaign or some other type of campaign. First, it gives the candidate a picture of where they stand with the electorate before any campaigning takes place. If the poll is done prior to announcing for office, the candidate may use the poll to decide whether or not they should even run for office. Second, it shows them where their weaknesses and strengths are in two main areas. The first is the electorate. A benchmark poll shows them what types of voters they are sure to win, those who they are sure to lose, and everyone in between those two extremes. This lets the campaign know which voters are persuadable so they can spend their limited resources in the most effective manner. It can also give them an idea of what messages, ideas, or slogans are the strongest with the electorate.[4]

2.2 Brushfire polls

Brushfire polls are polls taken during the period between the benchmark poll and tracking polls. The number of brushfire polls taken by a campaign is determined by how competitive the race is and how much money the campaign has to spend. These polls usually focus on likely voters, and the length of the survey varies with the number of messages being tested.

Brushfire polls are used for a number of purposes. First, they let the candidate know whether they have made any progress on the ballot, how much progress has been made, and in what demographics they have been making or losing ground. Second, they are a way for the campaign to test a variety of messages, both positive and negative, about themselves and their opponent(s). This lets the campaign know what messages work best with certain demographics and what messages should be avoided. Campaigns often use these polls to test possible attack messages that their opponent may use, and potential responses to those attacks. The campaign can then spend some time preparing an effective response to any likely attacks. Third, this kind of poll can be used by candidates or political parties to convince primary challengers to drop out of a race and support a stronger candidate.

2.3 Tracking polls

A tracking poll is a poll repeated at intervals, generally averaged over a trailing window.[5] For example, a weekly tracking poll uses the data from the past week and discards older data.

A caution is that estimating the trend is more difficult and error-prone than estimating the level: intuitively, if one estimates the change, the difference between two numbers X and Y, then one has to contend with the error in both X and Y; it is not enough to simply take the difference, as the change may be random noise. For details, see t-test. A rough guide is that if the change in measurement falls outside the margin of error, it is worth attention.

3 Potential for inaccuracy

Polls based on samples of populations are subject to sampling error, which reflects the effects of chance and uncertainty in the sampling process. The uncertainty is often expressed as a margin of error. The margin of error is usually defined as the radius of a confidence interval for a particular statistic from a survey. One example is the percent of people who prefer product A versus product B. When a single, global margin of error is reported for a survey, it refers to the maximum margin of error for all reported percentages using the full sample from the survey. If the statistic is a percentage, this maximum margin of error can be calculated as the radius of the confidence interval for a reported percentage of 50%. Others suggest that a poll with a random sample of 1,000 people has a margin of sampling error of 3% for the estimated percentage of the whole population.

A 3% margin of error means that if the same procedure is used a large number of times, 95% of the time the true population average will be within the 95% confidence interval of the sample estimate, plus or minus 3%. The margin of error can be reduced by using a larger sample; however, if a pollster wishes to reduce the margin of error to 1%, they would need a sample of around 10,000 people.[6] In practice, pollsters need to balance the cost of a large sample against the reduction in sampling error, and a sample size of around 500–1,000 is a typical compromise for political polls. (Note that to get complete responses it may be necessary to include thousands of additional participants.)[7]

Another way to reduce the margin of error is to rely on poll averages. This makes the assumption that the procedure is similar enough between many different polls, and uses the sample size of each poll to create a polling average.[8] An example of a polling average can be found here: 2008 Presidential Election polling average. Another source of error stems from faulty demographic models by pollsters who weight their samples by particular variables, such as party identification in an election. For example, if you assume that the breakdown of the US population by party identification has not changed since the previous presidential election, you may underestimate a victory or a defeat of a particular party candidate that

saw a surge or decline in its party registration relative to the previous presidential election cycle.

Over time, a number of theories and mechanisms have been offered to explain erroneous polling results. Some of these reflect errors on the part of the pollsters; many of them are statistical in nature. Others blame the respondents for not giving candid answers (e.g., the Bradley effect, the Shy Tory Factor); these can be more controversial.

3.1 Nonresponse bias

Since some people do not answer calls from strangers, or refuse to answer the poll, poll samples may not be representative samples from a population, due to a nonresponse bias. Because of this selection bias, the characteristics of those who agree to be interviewed may be markedly different from those who decline. That is, the actual sample is a biased version of the universe the pollster wants to analyze. In these cases, bias introduces new errors, one way or the other, that are in addition to errors caused by sample size. Error due to bias does not become smaller with larger sample sizes, because taking a larger sample size simply repeats the same mistake on a larger scale. If the people who refuse to answer, or are never reached, have the same characteristics as the people who do answer, then the final results should be unbiased. If the people who do not answer have different opinions, then there is bias in the results. In terms of election polls, studies suggest that bias effects are small, but each polling firm has its own techniques for adjusting weights to minimize selection bias.[9]

3.2 Response bias

Survey results may be affected by response bias, where the answers given by respondents do not reflect their true beliefs. This may be deliberately engineered by unscrupulous pollsters in order to generate a certain result or please their clients, but more often it is a result of the detailed wording or ordering of questions (see below). Respondents may deliberately try to manipulate the outcome of a poll, e.g. by advocating a more extreme position than they actually hold in order to boost their side of the argument, or by giving rapid and ill-considered answers in order to hasten the end of their questioning. Respondents may also feel under social pressure not to give an unpopular answer. For example, respondents might be unwilling to admit to unpopular attitudes like racism or sexism, and thus polls might not reflect the true incidence of these attitudes in the population. In American political parlance, this phenomenon is often referred to as the Bradley effect. If the results of surveys are widely publicized, this effect may be magnified, a phenomenon commonly referred to as the spiral of silence.

3.3 Wording of questions

It is well established that the wording of the questions, the order in which they are asked, and the number and form of alternative answers offered can influence the results of polls. For instance, the public is more likely to indicate support for a person who is described by the operator as one of the "leading candidates". This support itself overrides subtle bias for one candidate, as does lumping some candidates in an "other" category or vice versa. Thus comparisons between polls often boil down to the wording of the question. On some issues, question wording can result in quite pronounced differences between surveys.[10][11][12] This can also, however, be a result of legitimately conflicted feelings or evolving attitudes, rather than a poorly constructed survey.[13]

A common technique to control for this bias is to rotate the order in which questions are asked. Many pollsters also split-sample. This involves having two different versions of a question, with each version presented to half the respondents.

The most effective controls, used by attitude researchers, are:

asking enough questions to allow all aspects of an issue to be covered and to control effects due to the form of the question (such as positive or negative wording), the adequacy of the number being established quantitatively with psychometric measures such as reliability coefficients, and

analyzing the results with psychometric techniques which synthesize the answers into a few reliable scores and detect ineffective questions.

These controls are not widely used in the polling industry.
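The rules of thumb given in this article (a maximum margin of error of roughly 100 divided by the square root of the sample size, per footnote [6], and the caution in section 2.3 that a change between two polls must be judged against the error in both numbers) can be sketched in a few lines. This is an illustrative sketch only, not code from any polling organization; the function names are invented for the example.

```python
import math

def margin_of_error(sample_size: int) -> float:
    """Rule-of-thumb maximum margin of error at 95% confidence,
    in percentage points, for a simple random sample (footnote [6]):
    roughly 100 divided by the square root of the sample size."""
    return 100.0 / math.sqrt(sample_size)

def change_is_noteworthy(pct_old: float, pct_new: float,
                         n_old: int, n_new: int) -> bool:
    """Rough guide from section 2.3: a change between two poll
    estimates is worth attention only if it exceeds the combined
    margin of error, since each estimate carries its own error."""
    combined = math.hypot(margin_of_error(n_old), margin_of_error(n_new))
    return abs(pct_new - pct_old) > combined

# A sample of 1,000 gives roughly a 3-point margin of error,
# while 10,000 respondents are needed to reach 1 point.
print(round(margin_of_error(1000), 1))   # 3.2
print(margin_of_error(10000))            # 1.0

# A 2-point shift between two 1,000-person polls is within noise;
# a 6-point shift is not.
print(change_is_noteworthy(48, 46, 1000, 1000))  # False
print(change_is_noteworthy(48, 42, 1000, 1000))  # True
```

Note the combined error: the difference of two polls that each carry a 3-point margin has an effective margin closer to 4.5 points, which is why simply subtracting the two numbers, as the text warns, can mistake random noise for a trend.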

3.4 Coverage bias

Another source of error is the use of samples that are not representative of the population as a consequence of the methodology used, as was the experience of The Literary Digest in 1936. For example, telephone sampling has a built-in error because in many times and places, those with telephones have generally been richer than those without.

In some places many people have only mobile telephones. Because pollsters cannot use automated dialing machines to call mobile phones (it is unlawful in the United States to use automated dialing machines to reach phones where the phone's owner may be charged simply for taking a call[14]), these individuals are typically excluded from polling samples. There is concern that, if the subset of the population without cell phones differs markedly from the rest of the population, these differences can skew the results of the poll. Polling organizations have developed many weighting techniques to help overcome these deficiencies, with varying degrees of success. Studies of mobile phone users by the Pew Research Center in the US, in 2007, concluded that "cell-only respondents are different from landline respondents in important ways, (but) they were neither numerous enough nor different enough on the questions we examined to produce a significant change in overall general population survey estimates when included with the landline samples and weighted according to US Census parameters on basic demographic characteristics."[15]

This issue was first identified in 2004,[16] but came to prominence only during the 2008 US presidential election.[17] In previous elections, the proportion of the general population using cell phones was small, but as this proportion has increased, there is concern that polling only landlines is no longer representative of the general population. In 2003, only 2.9% of households were wireless (cellphones only), compared to 12.8% in 2006.[18] This results in "coverage error". Many polling organisations select their sample by dialling random telephone numbers; however, in 2008, there was a clear tendency for polls which included mobile phones in their samples to show a much larger lead for Obama than polls that did not.[19][20]

The potential sources of bias are:[21]

1. Some households use cellphones only and have no landline. This tends to include minorities and younger voters, and occurs more frequently in metropolitan areas. Men are more likely to be cellphone-only compared to women.

2. Some people may not be contactable by landline from Monday to Friday and may be contactable only by cellphone.

3. Some people use their landlines only to access the Internet, and answer calls only to their cellphones.

Some polling companies have attempted to get around that problem by including a "cellphone supplement". There are a number of problems with including cellphones in a telephone poll:

1. It is difficult to get co-operation from cellphone users, because in many parts of the US, users are charged for both outgoing and incoming calls. That means that pollsters have had to offer financial compensation to gain co-operation.

2. US federal law prohibits the use of automated dialling devices to call cellphones (Telephone Consumer Protection Act of 1991). Numbers therefore have to be dialled by hand, which is more time-consuming and expensive for pollsters.

4 Failures

A widely publicized failure of opinion polling to date in the United States was the prediction that Thomas Dewey would defeat Harry S. Truman in the 1948 US presidential election. Major polling organizations, including Gallup and Roper, indicated a landslide victory for Dewey.

In the United Kingdom, most polls failed to predict the Conservative election victories of 1970 and 1992, and Labour's victory in 1974. However, their figures at other elections have been generally accurate.

An oft-quoted example of opinion polls succumbing to errors occurred during the UK general election of 1992. Despite the polling organizations using different methodologies, virtually all the polls taken before the vote, and to a lesser extent exit polls taken on voting day, showed a lead for the opposition Labour party, but the actual vote gave a clear victory to the ruling Conservative party.

In their deliberations after this embarrassment, the pollsters advanced several ideas to account for their errors, including:

Late swing Voters who changed their minds shortly before voting tended to favour the Conservatives, so the error was not as great as it first appeared.

Nonresponse bias Conservative voters were less likely to participate in surveys than in the past and were thus under-represented.

The Shy Tory Factor The Conservatives had suffered a sustained period of unpopularity as a result of economic difficulties and a series of minor scandals, leading to a spiral of silence in which some Conservative supporters were reluctant to disclose their sincere intentions to pollsters.

The relative importance of these factors was, and remains, a matter of controversy, but since then the polling organizations have adjusted their methodologies and have achieved more accurate results in subsequent election campaigns.

5 Influence

5.1 Effect on voters

By providing information about voting intentions, opinion polls can sometimes influence the behavior of electors, and in his book The Broken Compass, Peter Hitchens asserts that opinion polls are actually a device for influencing public opinion.[22] The various theories about how this happens can be split into two groups: bandwagon/underdog effects, and strategic ("tactical") voting.

A bandwagon effect occurs when the poll prompts voters to back the candidate shown to be winning in the poll. The idea that voters are susceptible to such effects is old, stemming at least from 1884; William Safire reported that the term was first used in a political cartoon in the magazine Puck in that year.[23] It has also remained persistent in spite of a lack of empirical corroboration until the late 20th century. George Gallup spent much effort in vain trying to discredit this theory in his time by presenting empirical research. A recent meta-study of scientific research on this topic indicates that from the 1980s onward the bandwagon effect is found more often by researchers.[24]

The opposite of the bandwagon effect is the underdog effect. It is often mentioned in the media. This occurs when people vote, out of sympathy, for the party perceived to be losing the elections. There is less empirical evidence for the existence of this effect than there is for the existence of the bandwagon effect.[24]

The second category of theories on how polls directly affect voting is called strategic or tactical voting. This theory is based on the idea that voters view the act of voting as a means of selecting a government. Thus they will sometimes not choose the candidate they prefer on grounds of ideology or sympathy, but another, less-preferred, candidate, from strategic considerations. An example can be found in the United Kingdom general election of 1997. As he was then a Cabinet Minister, Michael Portillo's constituency of Enfield Southgate was believed to be a safe seat, but opinion polls showed the Labour candidate Stephen Twigg steadily gaining support, which may have prompted undecided voters or supporters of other parties to support Twigg in order to remove Portillo. Another example is the boomerang effect, where the likely supporters of the candidate shown to be winning feel that chances are slim and that their vote is not required, thus allowing another candidate to win.

In addition, Mark Pickup, in Cameron Anderson and Laura Stephenson's Voting Behaviour in Canada, outlines three additional behavioural responses that voters may exhibit when faced with polling data.

The first is known as a "cue taking" effect, which holds that poll data is used as a proxy for information about the candidates or parties. Cue taking is based on the psychological phenomenon of using heuristics to simplify a complex decision (243).[25]

The second, first described by Petty and Cacioppo (1996), is known as "cognitive response" theory. This theory asserts that a voter's response to a poll may not line up with their initial conception of the electoral reality. In response, the voter is likely to generate a "mental list" in which they create reasons for a party's loss or gain in the polls. This can reinforce or change their opinion of the candidate and thus affect voting behaviour.

Third, the final possibility is a behavioural response, which is similar to a cognitive response. The only salient difference is that a voter will go and seek new information to form their "mental list", thus becoming more informed of the election. This may then affect voting behaviour.

These effects indicate how opinion polls can directly affect political choices of the electorate. But directly or indirectly, other effects can be surveyed and analyzed on all political parties. The form of media framing and party ideology shifts must also be taken into consideration. Opinion polling in some instances is a measure of cognitive bias, which is variably considered and handled appropriately in its various applications.

5.2 Effect on politicians

Starting in the 1980s, tracking polls and related technologies began having a notable impact on US political leaders.[26] According to Douglas Bailey, a Republican who had helped run Gerald Ford's 1976 presidential campaign, "It's no longer necessary for a political candidate to guess what an audience thinks. He can [find out] with a nightly tracking poll. So it's no longer likely that political leaders are going to lead. Instead, they're going to follow."[26]

6 Regulation

Some jurisdictions around the world restrict the publication of the results of opinion polls, especially during the period around an election, in order to prevent possibly erroneous results from affecting voters' decisions. For instance, in Canada, it is prohibited to publish the results of opinion surveys that would identify specific political parties or candidates in the final three days before a poll closes.[27]

However, most Western democratic nations do not support an outright prohibition on the publication of pre-election opinion polls; most of them have no regulation, and some prohibit publication only in the final days or hours before the relevant poll closes.[28] A survey by Canada's Royal Commission on Electoral Reform reported that the prohibition period for publication of survey results differed widely across countries. Of the 20 countries examined, 3 prohibit publication during the entire period of campaigns, while others prohibit it for a shorter term, such as the polling period or the final 48 hours before a poll closes.[27] In India, the Election Commission has prohibited publication in the 48 hours before the start of polling.

7 See also

Deliberative opinion poll
Entrance poll
Everett Carll Ladd
Exit poll
Historical polling for U.S. Presidential elections
List of polling organizations
Open access poll
Push poll
Referendum
Roper Center for Public Opinion Research
Sample size determination
Straw poll

8 Footnotes

[1] Cantril, Hadley; Strunk, Mildred (1951). Public Opinion, 1935-1946. Princeton University Press. p. vii.

[2] Vasileios Lampos, Daniel Preotiuc-Pietro and Trevor Cohn. A user-centric model of voting intention from Social Media. Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. ACL, pp. 993-1003, 2013. http://aclweb.org/anthology/P/P13/P13-1098.pdf

[3] Brendan O'Connor, Ramnath Balasubramanyan, Bryan R. Routledge, and Noah A. Smith. From Tweets to Polls: Linking Text Sentiment to Public Opinion Time Series. In Proceedings of the International AAAI Conference on Weblogs and Social Media. AAAI Press, pp. 122-129, 2010.

[4] Kenneth F. Warren (1992). In Defense of Public Opinion Polling. Westview Press. pp. 200-1.

[5] "About the Tracking Polls". Cnn.com. Retrieved 2013-02-18.

[6] An estimate of the margin of error in percentage terms can be gained by the formula 100 divided by the square root of the sample size.

[7] publicagenda.org. Retrieved 2013-02-18.

[8] Lynch, Scott M. Introduction to Bayesian Statistics and Estimation for Social Scientists (2007).

[9] Langer, Gary (May 2003). "About Response Rates: Some Unresolved Questions" (PDF). ABC News. Retrieved 2010-05-17.

[10] "Public Agenda Issue Guide: Higher Education - Public View - Red Flags". Publicagenda.org. Retrieved 2013-02-18.

[11] "Public Agenda Issue Guide: Gay Rights - Public View - Red Flags". Publicagenda.org. Retrieved 2013-02-18.

[12] "Public Agenda Issue Guide: Abortion - Public View - Red Flags". Public Agenda.

[13] "The Seven Stages of Public Opinion". Publicagenda.org. Retrieved 2013-02-18.

[14] http://transition.fcc.gov/cgb/policy/TCPA-Rules.pdf

[15] Keeter, Scott (2007-06-27). "How Serious Is Polling's Cell-Only Problem?". Pew Research Center Publications.

[16] Blumenthal, Mark (2008-09-19). "More Pollsters Interviewing By Cell Phone". Pollster.com. Retrieved 2008-11-04.

[17] Blumenthal, Mark (2008-07-17). "New Pew data on cell phones". Pollster. Retrieved 2008-11-04.

[18] Blumberg SJ, Luke JV (2007-05-14). "Wireless Substitution: Early Release of Estimates Based on Data from the National Health Interview Survey, July-December 2006" (PDF). Centers for Disease Control. Retrieved 2009-06-22.

[19] Silver, Nate (2008-11-02). "The Cellphone effect, continued". FiveThirtyEight.com. Retrieved 2008-11-04.

[20] Blumenthal, Mark (2008-10-17). "More Cell Phone Data from Gallup". Pollster.com. Retrieved 2008-11-04.

[21] Silver, Nate (2008-07-22). "The Cellphone Problem, Revisited". FiveThirtyEight.com. Retrieved 2008-11-04.

[22] Hitchens, Peter (2009). "Chapter 1, Guy Fawkes Gets a Blackberry". The Broken Compass: How British Politics Lost its Way. Continuum International Publishing Group Ltd. ISBN 1-84706-405-1.

[23] Safire, William, Safire's Political Dictionary, page 42. Random House, 1993.

[24] Irwin, Galen A. and Joop J. M. Van Holsteyn. Bandwagons, Underdogs, the Titanic and the Red Cross: The Influence of Public Opinion Polls on Voters (2000).

[25] Anderson, Cameron; Pickup, Mark (2010). Voting Behaviour in Canada. Vancouver: UBC Press. pp. 243-278.

[26] Kaiser, Robert G. (March 9, 2011). "David S. Broder: The best political reporter of his time". The Washington Post. Retrieved 2011-03-09.

[27] Claude Emery (January 1994), Public opinion polling in Canada, Library of Parliament, Canada.

[28] Tim Bale (2002). "Restricting the broadcast and publication of pre-election and exit polls: some selected examples". Representation 39 (1): 15-22. doi:10.1080/00344890208523210.

9 References

Asher, Herbert: Polling and the Public. What Every Citizen Should Know, fourth edition. Washington, D.C.: CQ Press, 1998.

Bourdieu, Pierre, "Public Opinion does not exist", in Sociology in Question, London, Sage (1995).

Bradburn, Norman M. and Seymour Sudman. Polls and Surveys: Understanding What They Tell Us (1988).

Cantril, Hadley. Gauging Public Opinion (1944).

Cantril, Hadley and Mildred Strunk, eds. Public Opinion, 1935-1946 (1951), massive compilation of many public opinion polls from US, UK, Canada, Australia, and elsewhere.

Converse, Jean M. Survey Research in the United States: Roots and Emergence 1890-1960 (1987), the standard history.

Crespi, Irving. Public Opinion, Polls, and Democracy (1989).

Gallup, George. Public Opinion in a Democracy (1939).

Gallup, Alec M., ed. The Gallup Poll Cumulative Index: Public Opinion, 1935-1997 (1999), lists 10,000+ questions, but no results.

Gallup, George Horace, ed. The Gallup Poll; Public Opinion, 1935-1971, 3 vol. (1972), summarizes results of each poll.

Glynn, Carroll J., Susan Herbst, Garrett J. O'Keefe, and Robert Y. Shapiro. Public Opinion (1999), textbook.

Lavrakas, Paul J. et al., eds. Presidential Polls and the News Media (1995).

Moore, David W. The Superpollsters: How They Measure and Manipulate Public Opinion in America (1995).

Niemi, Richard G., John Mueller, Tom W. Smith, eds. Trends in Public Opinion: A Compendium of Survey Data (1989).

Oskamp, Stuart and P. Wesley Schultz. Attitudes and Opinions (2004).

Robinson, Claude E. Straw Votes (1932).

Robinson, Matthew. Mobocracy: How the Media's Obsession with Polling Twists the News, Alters Elections, and Undermines Democracy (2002).

Rogers, Lindsay. The Pollsters: Public Opinion, Politics, and Democratic Leadership (1949).

Traugott, Michael W. The Voter's Guide to Election Polls, 3rd ed. (2004).

Webster, James G., Patricia F. Phalen, and Lawrence W. Lichty. Ratings Analysis: The Theory and Practice of Audience Research. Lawrence Erlbaum Associates, 2000.

Young, Michael L. Dictionary of Polling: The Language of Contemporary Opinion Research (1992).

Additional Sources

Walden, Graham R. Survey Research Methodology, 1990-1999: An Annotated Bibliography. Bibliographies and Indexes in Law and Political Science Series. Westport, CT: Greenwood Press, Greenwood Publishing Group, Inc., 2002. xx, 432p.

Walden, Graham R. Public Opinion Polls and Survey Research: A Selective Annotated Bibliography of U.S. Guides and Studies from the 1980s. Public Affairs and Administrative Series, edited by James S. Bowman, vol. 24. New York, NY: Garland Publishing Inc., 1990. xxix, 360p.

Walden, Graham R. Polling and Survey Research Methods 1935-1979: An Annotated Bibliography. Bibliographies and Indexes in Law and Political Science Series, vol. 25. Westport, CT: Greenwood Publishing Group, Inc., 1996. xxx, 581p.

10 External links

Polls from UCB Libraries GovPubs
The Pew Research Center: a nonpartisan "fact tank" providing information on the issues, attitudes and trends shaping America and the world by conducting public opinion polling and social science research
Use Opinion Research To Build Strong Communication, by Frank Noto
National Council on Public Polls: an association of polling organizations in the United States devoted to setting high professional standards for surveys
Survey Analysis Tool, based on A. Berkopec, "HyperQuick algorithm for discrete hypergeometric distribution", Journal of Discrete Algorithms, Elsevier, 2006.
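The last entry refers to a tool built on the hypergeometric distribution, which gives exact probabilities when a sample is drawn without replacement from a finite population, as in a poll of a small electorate. As an illustrative sketch only (a direct evaluation via binomial coefficients, not the HyperQuick algorithm the link describes), the probability mass function can be written in Python:

```python
from math import comb

def hypergeom_pmf(N, K, n, k):
    """Probability of exactly k successes in a sample of size n drawn
    without replacement from a population of N containing K successes."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# Hypothetical example: a town of 500 voters, 300 of whom favour a
# candidate; probability that a random sample of 10 contains exactly
# 6 supporters.
p = hypergeom_pmf(500, 300, 10, 6)
```

For production use, a library implementation such as scipy.stats.hypergeom would normally be preferred over this direct formula, which recomputes binomial coefficients at every call.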

[Image: Voter polling questionnaire on display at the Smithsonian Institution]
[Image: 2000 Palm Beach County voting stand and ballot box]
[Image: Voter Turnout by Race-Ethnicity, 2008 US Presidential Election]

11 Text and image sources, contributors, and licenses

11.1 Text
Opinion poll Source: http://en.wikipedia.org/wiki/Opinion%20poll?oldid=654413246 Contributors: Vicki Rosenzweig, SimonP, Boud,
Michael Hardy, Isomorphic, 172, Jeandr du Toit, Charles Matthews, Hlavac, Bloodshedder, Chealer, Zandperl, Jxg, Meelar, Gidonb,
Timrollpickering, Bkell, Michael Snow, Jord, Srtxg, Jpo, DocWatson42, Duncan Keith, Tom harrison, Xinoph, Jackol, Neilc, Gadum,
Fys, Quota, WOT, Klemen Kocjancic, Cab88, Rich Farmbrough, John FitzGerald, Xezbeth, Bender235, Quinten, Kwamikagami, Surachit, Aude, Harryfdoherty, Halavais, Bobo192, Stesmo, Trevj, Grutness, Alansohn, Andrew Gray, Kpwa gok, Avenue, Max Naylor, Oleg
Alexandrov, Joriki, Woohookitty, Wafry, Junes, Prashanthns, Stefanomione, Lastorset, BD2412, LanguageMan, RadioActive, Thekohser,
Fred Bradstadt, Raprat0, Fish and karate, Therearenospoons, Ground Zero, Estrellador*, JdforresterBot, Tequendamia, Bgwhite, ,,n, The
Rambling Man, YurikBot, Wavelength, Jlittlet, Lincolnite, RadioFan, Stephenb, Shell Kinney, Rjensen, RazorICE, Nick, Tachyon01,
Katpatuka, Zzuuzz, Mais oui!, Whouk, Je Silvers, Akrabbim, SmackBot, Ikip, TFMcQ, Stephensuleeman, Delldot, RobotJcb, Isaac
Dupree, XxAvalanchexX, Chris the speller, Nbarth, Darth Panda, Frsky, Shuki, Yidisheryid, Steven X, AdeMiami, Chrylis, Howard the
Duck, Gbinal, DMacks, Kendrick7, BlackTerror, Will Beback, Nelro, Kuru, John, Grumpyyoungman01, Hannah Commodore, RichardF,
Levineps, Ft93110, Joseph Solis in Australia, JHP, Chris53516, FakeTango, Ve2jgs, Tawkerbot2, JForget, Americasroof, Daob, Picaroon,
Lazulilasher, Korky Day, Mblumber, Gogo Dodo, Epbr123, Dasani, Rethas, Al Lemos, Sandossu, Yettie0711, AntiVandalBot, Mack2,
Dylan Lake, Pipacopa, Roving Wordslinger, MER-C, Mcorazao, TAnthony, Schmackity, JamesBWatson, Appraiser, Kawanazii, Recurring dreams, Destynova, Alanbrowne, Cgingold, Boob, Pax:Vobiscum, Votemania, Sk wiki, Maradif, LedgendGamer, Filll, Pollpub,
Bob98b3, NRCenter, AntiSpamBot, Flatterworld, Student7, Steel1943, Janice Vian, Sporti, Cmonday, Zigorath, CallieRyan, Una Smith,
Lingamer8, PDFbot, David Condrey, Botev, Accounting4Taste, MaesterTonberry, Flyer22, Tartuoboy, Anchor Link Bot, Melcombe,
OneLewis101, Capguru, Jean-Francois Landry, ClueBot, ImperfectlyInformed, Dvorsky, Bcnsubmit, Ottawahitech, Piledhigheranddeeper,
Jusdafax, Bruceanthro, G.R.Walden, Qwfp, Kurdo777, Roxy the dog, Jprw, Whooym, Addbot, Baloghmatt, Br1z, Member456345, Pollster09, 24Research, WatariSan, Politicas, Patriotis672, Member529, Optimistic5, Optimel4, Member564, Lincolninst, Streetpav, Stafaxis,
English Linesman, Emostafa 2008, Bhentze, Yobot, Jonathan26, AnomieBOT, Materialscientist, JohnFromPinckney, Quebec99, LilHelpa,
Cydelin, Wikipersonae, Sophus Bie, CaptainFugu, Tobby72, Math321, Citation bot 1, Blargh29, Pinethicket, Kiefer.Wolfowitz, Drcrnc,
Rplal120, Neil Chadda, Cnwilliams, Ukr-Trident, Lotje, January, Jies1, RjwilmsiBot, MMS2013, NameIsRon, Polly Ticker, YorkshireKat, John of Reading, Ajraddatz, Maccabi72, Vladimir.qq, Sungzungkim, H3llBot, Quae legit, Erianna, Natanv, Donner60, Ip82166,
ClueBot NG, Widr, Dumebi1986, MerlIwBot, Trift, Helpful Pixie Bot, Machaven, Bbzzme, Okstacie123, Redroar75, Conifer, Aisteco,
BattyBot, Tutelary, Pratyya Ghosh, Etp01, Mjh110101, Mrb08876, SD5bot, Rebskin, Futurist110, Maniesansdelire, Mmqa, JaconaFrere,
Skr15081997, Monkbot, Chaz.bush1, Avinashmjpatil, Gutinapo, MoreInformativePolls and Anonymous: 232

11.2 Images
File:Ambox_globe_content.svg Source: http://upload.wikimedia.org/wikipedia/commons/b/bd/Ambox_globe_content.svg License: Public domain Contributors: Own work, using File:Information icon3.svg and File:Earth clip art.svg Original artist: penubag
File:Commons-logo.svg Source: http://upload.wikimedia.org/wikipedia/en/4/4a/Commons-logo.svg License: ? Contributors: ? Original
artist: ?
File:Folder_Hexagonal_Icon.svg Source: http://upload.wikimedia.org/wikipedia/en/4/48/Folder_Hexagonal_Icon.svg License: Cc-by-sa-3.0 Contributors: ? Original artist: ?
File:Mergefrom.svg Source: http://upload.wikimedia.org/wikipedia/commons/0/0f/Mergefrom.svg License: Public domain Contributors:
? Original artist: ?
File:People_icon.svg Source: http://upload.wikimedia.org/wikipedia/commons/3/37/People_icon.svg License: CC0 Contributors: OpenClipart Original artist: OpenClipart
File:Portal-puzzle.svg Source: http://upload.wikimedia.org/wikipedia/en/f/fd/Portal-puzzle.svg License: Public domain Contributors: ?
Original artist: ?
File:Support_For_Direct_Popular_Vote.png Source: http://upload.wikimedia.org/wikipedia/commons/f/fa/Support_For_Direct_
Popular_Vote.png License: Public domain Contributors: Own work Original artist: Szu
File:TallahaseePalmBeachBallotBox1.jpg Source: http://upload.wikimedia.org/wikipedia/commons/9/94/TallahaseePalmBeachBallotBox1.jpg License: CC BY 2.5 Contributors: Own work Original artist: Infrogmation
File:Voter_Turnout_by_Race-Ethnicity,_2008_US_Presidential_Election.png Source: http://upload.wikimedia.org/wikipedia/commons/f/fd/Voter_Turnout_by_Race-Ethnicity%2C_2008_US_Presidential_Election.png License: CC BY 3.0 Contributors: Own work Original artist: Rcragun
File:Voter_poll.jpg Source: http://upload.wikimedia.org/wikipedia/commons/b/bf/Voter_poll.jpg License: CC BY-SA 3.0 Contributors:
Own work (Original text: I (RadioFan (talk)) created this work entirely by myself.) Original artist: RadioFan (talk)
File:WMF_Strategic_Plan_Survey.svg Source: http://upload.wikimedia.org/wikipedia/commons/6/69/WMF_Strategic_Plan_Survey.svg License: CC BY-SA 4.0 Contributors: File:WMFstratplanSurvey1.png Original artist: Fred the Oyster
File:Wiki_letter_w_cropped.svg Source: http://upload.wikimedia.org/wikipedia/commons/1/1c/Wiki_letter_w_cropped.svg License: CC-BY-SA-3.0 Contributors: Wiki_letter_w.svg Original artist: Wiki_letter_w.svg: Jarkko Piiroinen

11.3 Content license

Creative Commons Attribution-Share Alike 3.0
