
Self-test questions from book:

1.1 Why study research?

- What is business research and what is a management dilemma?

→ Air Swiss example: searching for new partners to team up with → the dilemma.
→ Analysis of six possible partners on specific factors, to come up with a possible managerial proposition → the business research.

- Which three factors have stimulated an interest in this scientific approach to decision-making?

- How did recent developments (which factors) affect the business research process?

1.2 Four different types of studies represented by cases

- Reporting: To generate statistics or to provide an account, summation of data.


→ Air Swiss, for example: the researcher needs to know which data from other companies to look at in order to assess and provide the statistics that build the conclusion.

- Descriptive: To discover answers to what, who, when, where and sometimes how questions. Describes or defines a certain subject.
→ Involves the collection of data and examination of the distribution of a single event or characteristic (research variable), or of two or more variables.

- Explanatory: To answer why and how questions. Explains the reasons why something happens.
→ A correlational study, for example, where the relation between two or more variables is measured.
→ The use of hypotheses to account for the forces that caused a certain phenomenon.

- Predictive: If an event has happened, it is desirable to predict when it might recur.
→ Economic predictions, for example: consumer confidence, currency exchange rates and linear trends.

1.3 Is research always problem-solving based? → Yes, all research should provide answers to questions

Applied research: has a practical problem-solving emphasis → conducted in order to reveal answers to specific questions.

Pure (basic) research → developing new ideas that do not answer specific problems/questions.

For example, developing an algorithm for performing certain performance checks is pure research; applied research would be applying this algorithm to provide answers to a specific question.

1.4 What makes good research? → see book pages 12, 13 & 14

Basically: purposeful, with a clearly defined focus and plausible goals; defensible, ethical and replicable procedures; and evidence of objectivity. Reporting of findings should be complete and honest. Appropriate techniques should be used and conclusions should be justified by the findings. Report in an academic and professional tone and language.

1.5 Research philosophies; positivism and interpretivism (phenomenology)

Positivism: The view that the world is external and objective (fact); the researcher is independent and value-free in their influence. Observations are assumed to be objective and quantitative. → Three categories of statements: true, false or meaningless (neither of the two); usually large samples.

For example, using a company's KPIs to assess performance: a large sample size is needed for a representative conclusion! (quantitative research)

An interpretivist is more interested in the subjective interpretations of employees or employers to analyse a company's performance; fewer samples are needed. (Qualitative interviews)

Interpretivism: The view that the world is subjective (meaning); the researcher is part of the research; subjective interpretations of observations. → Action research, smaller sample sizes.

Realism: What we perceive is really real, and not in our minds or an illusion.

1.6 Scientific reasoning: deduction and induction


Deduction: the conclusion must follow from the reasons (premises) given; for it to be true, the argument must be valid and the reasons must agree with the real world.
- All regular employees are trusted not to steal (premise 1)
- John is a regular employee (premise 2)
- John is trusted not to steal (conclusion)

Induction: drawing a conclusion from one or more facts. The conclusion explains the facts, and the facts support the conclusion. The task of research is largely to confirm or reject hypotheses and to design methods to discover and measure other evidence.

Induction is used to draw up hypotheses. For example, the failure of a marketing campaign can be explained by poor execution of the campaign, but also by a hurricane in the city.

Combining induction and deduction: Induction occurs when we observe a fact and ask 'why is this?' In order to answer this question, we come up with a hypothesis; deduction is the process by which we find facts that support the hypothesis.

Empirical data → originating from or based on observation or experience, rather than on theory.

Scientific method: Induction, deduction, observation and hypothesis combined

1.7 Understanding theory: components and connectors

Concept: A collection of meanings or characteristics associated with certain events, objects, conditions, situations and behaviours.
- Gestalt psychology: borrowing words from other languages to describe a concept
- Impressionism: borrowing from other fields of expertise, for example art, to describe a concept ('velocity' is borrowed by economists from physics)

Height, width, depth, profit, running, walking, skipping, crawling or hopping are all concepts.
They symbolize a conception of properties.

Attitudes are abstract, but we want to measure them using carefully selected concepts
- Depends on how clearly we conceptualize
- And on how well others understand it

Abstract concepts are called constructs: personality, presentation and language skills are constructs. Spelling, vocabulary, keyboard speed and manuscript errors are concepts used to measure these constructs.

Construct refers to an image or idea specifically invented for a given research or theory-
building purpose.

Operational definitions: A concept that has been operationalized so that it measures what it needs to measure. For example, class status measured in 'hours of credit'.

Variable: another word for a construct, but usually referring to a numerical value
- Dichotomous variable: 0 or 1, for example employed/unemployed
- Continuous variable: for example temperature; it can take any value within a certain range
- Independent variable (IV): e.g. leadership style
- Dependent variable (DV): e.g. job satisfaction (it depends on the leadership style)
- Moderating variable (MV): a second independent variable that is believed to have a significant contributory or contingent effect on the original IV-DV relationship.
It is normally hypothesized that the IV causes the DV to occur.

“Training (IV) will lead to higher productivity (DV), especially among young workers (MV)”

- Intervening or mediating variable (IVV): a conceptual mechanism through which the IV and MV might affect the DV.

“Training (IV) will lead to higher productivity (DV) by increasing the skill level (IVV)”
- Control variables (CV): to ensure the results will not be biased by, for example, sunshine in the above-mentioned example; they usually do not have a significant effect on the research. In research these are usually: age, gender, ethnicity, place of living, place of establishment.

"Training (IV) will lead to higher productivity (DV), especially among younger workers (MV), when the sun is shining (CV), by increasing the skill level (IVV)."
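The IV/DV/MV/IVV relationships above can be illustrated numerically. A minimal sketch in Python (all variable names, effect sizes and the simulated data are invented for illustration, not taken from the book): a moderating variable enters an ordinary least-squares regression as an interaction term with the IV.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

training = rng.integers(0, 2, n)               # IV, dichotomous: trained or not
young = rng.integers(0, 2, n)                  # MV, dichotomous: young worker or not
skill = 2.0 * training + rng.normal(0, 1, n)   # IVV: training raises skill level

# DV: productivity; the hypothetical moderation means training helps
# young workers more (the training*young interaction term)
productivity = (5 + 1.0 * training + 0.5 * young
                + 1.5 * training * young + 0.8 * skill
                + rng.normal(0, 1, n))

# OLS with an interaction column models the moderating effect
X = np.column_stack([np.ones(n), training, young, training * young, skill])
coef, *_ = np.linalg.lstsq(X, productivity, rcond=None)
print(dict(zip(["const", "IV", "MV", "IVxMV", "IVV"], coef.round(2))))
```

The estimated coefficient on the interaction column recovers the moderating effect; the coefficient on `skill` recovers the mediating path.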

1.8 Propositions and hypotheses

A proposition is a statement about a concept that may be judged as true or false. When a proposition is formulated for empirical testing, we call it a hypothesis.

- Descriptive hypothesis: These are propositions that typically state the existence, size,
form or distribution of a variable.

In Denmark (case), employment rates are on average higher (variable)

Researchers often use a research question instead of a descriptive hypothesis: What is the
employment rate in Denmark?

Relational hypothesis: describes a relationship between two variables with respect to a particular case:

Foreign (variable) cars are perceived by Italians (case) to be of better quality (variable) than
domestic cars.
Correlational hypothesis: states that variables occur together in a specific manner, without one causing the other.

Explanatory (causal) hypothesis: one variable leads to a change in the other (the IV leads to a change in the DV).

1.9 Theory

Theory is a set of systematically interrelated concepts, definitions and propositions that are
advanced to explain and predict phenomena (facts).

Theory and research:
→ Narrows the range of study
→ Suggests which methods are appropriate
→ Suggests a system that classifies data in the most meaningful way
→ Can be used to predict further facts
→ Summarizes what is known about the object of study

Models: Represents a phenomenon through the use of analogy

Theory is used to explain; models are used to represent.
→ Descriptive models: describe the behaviour of elements
→ Explicative models: extend the application of well-developed theories or improve understanding of concepts
→ Simulation models: clarify the structural relationships between concepts

Chapter 2: Research process and proposal

Research process: From research dilemma to the final conclusion.

Research dilemma/management dilemma: Triggers the need for investigating how the
dilemma can be solved.

Design strategy: type, scope, purpose, time, environment
Data collection/sampling design: sample size and how the data is collected
Question and instrument testing: pretesting
Instrument revision: revision/correction after testing
Data collection and preparation: the actual research phase
Data analysis and interpretation: analysing the collected data
Research reporting: reporting the data
Policy/management decision: decision based on the data

Academic teamwork: make sure to assign a team leader, the person who is best at managing tasks and groups.

2.2 Management research and measurement questions

The main research dilemma has to be divided into other, more specific questions that can be measured.

Management question categories:
- Choice of purpose or objectives → What do we want to achieve?
- Generation and evaluation of solutions → How can we achieve what we seek?
- Troubleshooting or control situation → Why does our department incur the highest costs?

Exploration: An exploration phase typically begins with searching published data (orientation); researchers often seek out people who are well informed about the topic.

When the researcher has a clear statement of the dilemma, he or she must work with the manager to translate it into a research question.

Research question → fact-oriented and information-gathering.
The question is the hypothesis of choice that best states the objectives of the research study. Answering the research question will provide the manager with the information necessary to make a decision.

Fine-tuning the research question: After exploration and the literature review, the project begins to crystallize → new questions come out of the exploration, or several questions have already been answered.

- Examine the constructs and concepts to be measured: are they operationalized?
- If hypotheses are used: are they better than others? Do they fit the research question? Are they testable?
- Review the questions and break them down into secondary, tertiary, etc.
- Determine what evidence has to be gathered/collected
- Set the scope to establish boundaries for the research.

Investigative questions: questions that reveal the specific pieces of information one needs to know in order to answer the research question (sub-questions).

Measurement questions:
- Pre-designed or pre-tested: formulated and pre-designed/tested by other researchers (recorded in the literature) → enhanced validity. Basically the questions we ask our respondents (e.g. Paashuis's book with questions, etc.)
- Custom-designed questions: tailored to fit the investigative questions and the information required (a new questionnaire on a specific idea/topic)

2.3 Research process

Any excellent research starts with a good research problem:
- Think of theory related to your problem; if you cannot find any theory, you are probably not being scientific enough
- Non-trivial problems → problems that need research and are not easily answered by people around you
- Narrowly defined, not explaining/containing theory about the whole world
- Relevant topics, meaningful to the field.

Issues regarding problem formulation:
- Politically motivated research: a manager's motives in seeking research are not always obvious; they might express a need for specific information
- Ill-defined management problems: some categories of problems are so complex, interrelated and bound to constraints that they are not well defined and are hard to measure.

Unresearchable questions: Some questions are researchable and some are not. To be researchable, some type of data collection must be able to provide answers; for some questions this is not possible.

Research design: The blueprint for fulfilling objectives and answering questions. This includes all the specific methods used to measure data and eventually answer the questions: for example case studies, quantitative methods, qualitative methods, scaled or open-ended questions, interviews, etc.

Favoured technique syndrome: Some researchers are method-bound. They favour one specific type of research above others, for example because they are experienced in that specific method or simply like it. This might blind them to other, more suitable and 'better' options for conducting the research.

Sample design: The researcher must determine whom and how many people to interview. A sample is a part of the population, carefully selected to represent that population.

Pilot testing: Conducted to detect weaknesses in the data collection methods. Often skipped by researchers, but skipping it means the method is never evaluated, resulting in higher costs.

Data collection:
Data: facts presented to the researcher by the environment.
- Abstractness: some data is more metaphorical than real, for example the measurement of an effect
- Verifiability: when sensory experiences consistently produce the same result, our data is said to be trustworthy, since it can be verified
- Elusiveness (difficult to capture): the time-bound nature and speed of data; for example, data from the '80s is no longer relevant today
- Closeness to the phenomenon: reflects the truthfulness (closeness) to the phenomenon/variable

Company database strip-mining: The amount of data already available within an organisation might distract managers from doing further research. Rarely will this information answer all management questions related to a particular management dilemma.

Analysis and interpretation:

Data analysis: usually involves reducing the accumulated data to a manageable amount, developing summaries, looking for patterns and applying statistical techniques.
- For example, when 5,000 telephone respondents leave their opinion, statistical techniques can be used to distil one general picture for all respondents.
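This kind of data reduction can be sketched in a few lines of Python. The rating scale and the simulated responses below are hypothetical, purely to show how thousands of raw answers collapse into a handful of summary figures:

```python
import random
from collections import Counter
from statistics import mean, median

# Hypothetical: 5,000 respondents rate satisfaction on a 1-5 scale
random.seed(42)
responses = [random.choice([1, 2, 3, 4, 5]) for _ in range(5000)]

summary = {
    "n": len(responses),
    "mean": round(mean(responses), 2),
    "median": median(responses),
    # the distribution as percentages: the 'pattern' hidden in the raw data
    "distribution_%": {k: round(100 * v / len(responses), 1)
                       for k, v in sorted(Counter(responses).items())},
}
print(summary)
```

The same idea scales up: real projects swap the `random.choice` line for actual survey data and the dictionary for a report table.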

Reporting the results:
- Insightful adaptation of the information to the client's needs
- Careful choice of words in crafting interpretations, conclusions and recommendations.
Examples of the above are an executive summary, an introduction with background, technical appendices and a section on implementation and implementation strategies.

Resource allocation and budgets:
- Rule of thumb: taking a fixed percentage of, for example, annual revenue as a basis for determining the market research budget.
- Departmental or functional area budgeting: allocates a portion of total expenditures in the unit to research.
- Task budgeting: selects specific research projects to support on an ad hoc basis; supports definitive cost-benefit analysis.

Research evaluation:
- Ex post facto: after the event → for example, evaluation of a marketing campaign; often too late to base management decisions on.
- Prior or interim evaluations: evaluations in advance of or during the research or event.

2.4 Research proposal

→ Includes decisions made in an early stage of project planning, including the management question, the research question hierarchy and exploration.

Content:
- Statement of research questions
- Brief description of research methodology

Purpose:
- Presents the management and research questions and relates their importance
- Discusses the research efforts of others who have worked on related management questions
- Suggests the data necessary for solving the questions and how the data will be gathered and interpreted.

Sponsor uses:
All research has a sponsor in one way or another; the proposal makes it easy for the sponsor to evaluate whether the research goals have been achieved. Proposals are usually submitted in response to a request for a bid or request for proposal (RFP).

Research benefits:
- Gives guidance to the researchers and the sponsors
- Gives the sponsor insight into the research questions, so that he can evaluate whether the problem is covered.
- Time and costs are covered for both parties.

Types of proposals:
Internal proposal: within a company; usually small and solicited (via an RFP, request for proposal).
External proposal: can be solicited or unsolicited; the researcher is likely to compete with others to 'get the job', e.g. a consulting firm helping a large financial institution with a certain problem.
Critical path method (CPM): when a research project is large and complex, make a CPM chart that includes all steps and methods used in the research.

Chapter 3: Literature review

A literature review has to be done in order to:
- Establish the context of the topic by referencing previous work
- Understand the structure of the problem
- Identify theories and ideas related to the problem
- Identify relevant variables and relations
- Show what previous research has been done
- Show which research methods have been chosen
- Show what still needs to be done
- Gain a new perspective on the problem

→ Scientific literature review: referencing previous work.

The first function of a literature review is to embed the current study in the existing structure of knowledge.
General problem of literature reviews → authors have different styles and perspectives of thinking and writing; therefore not all literature reviews will cover the same topics or share the same writing style. An economist and a geologist will write different literature reviews on a specific problem, even when using the same sources.

Meta-analysis: a quantitative method of combining previous studies or experiments to compare or summarize the selected studies.
- Advantages: a very structured approach to summarizing the accumulated knowledge; often able to detect relationships.
- Disadvantages: it can compare apples and oranges, leaving out important factors and variables that have been discussed in some studies but not in others. Another disadvantage is that it takes tremendous effort and is time-consuming.
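The quantitative core of a meta-analysis can be tiny. A minimal fixed-effect pooling sketch in Python, with five entirely hypothetical effect sizes and standard errors (the book does not prescribe this particular method; inverse-variance weighting is one standard approach):

```python
from math import sqrt

# Hypothetical (effect size, standard error) pairs from five previous studies
studies = [(0.30, 0.10), (0.45, 0.15), (0.20, 0.08), (0.55, 0.20), (0.35, 0.12)]

# Fixed-effect pooling: weight each study by the inverse of its variance,
# so precise studies (small SE) count more than noisy ones
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")
```

Note how the pooled estimate lands nearest the most precise studies; the "apples and oranges" criticism above applies exactly here, since the arithmetic happily averages studies that measured different things.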

Ingredients of a good literature review:
- Basic: ensure that it gives a decent account of the literature and informs the reader about what has been done so far
- Seasoning: make it your own work by reflecting your own thoughts on and assessments of the current literature; it also points out why your current study makes an important contribution to the field.

Critical review: for example, reviews of academic books in which scientists give their assessment of a specific book. The objective is to assess the quality of the text and to provide a short summary of the content.

- Most journals use 'peer review' to decide which manuscripts they publish.
The journal will ask three scientists who have published literature on the same topic for a critical review, in order to evaluate and confirm the statements. It also gives the writer an opportunity to improve his or her manuscript.
- Reviewers do not only criticize; they are also very constructive and offer solutions.

The structure of a critical review is:
1. Introduction
2. Summary
3. Critique
4. Conclusion

Literature sources
- Primary sources: full-text publications of theoretical and empirical studies (journals, reports, pre-publications, academic books)
- Secondary sources: combinations of primary sources, made to produce a summary or a new literature review (papers, bibliographies, directories)

Useful criteria and restrictions for assessing literature:
- Time of publication
- Relevance to the study: does it add to the arguments and information I offer?
- Academic field
- The scope of the study (broader is better than narrow, but this is also determined by your own study)

Chapter 4 Ethics

4.1 What is ethics in research?

→ The study of 'right behaviour'; it addresses the question of how to conduct research in a moral and responsible way.
Thus: how to use methodology properly to conduct sound research, without harming respondents and those involved in the research, or letting them suffer negative consequences from research activities.

Deontology: the ends never justify means that are questionable on ethical grounds.
Teleology: the morality of the means has to be judged by the ends it serves. The benefits of a research study are weighed against the costs of harming the people involved.

4.2 Ethical treatment of participants

- Explain the benefits of the study
- Explain the participants' rights and protections
- Obtain informed consent

Consent means permission for something to happen (synonym: agreement)

Benefits of a research should be discussed, for example: help in tackling a certain medical
problem.

Deception: when participants are told only part of the truth and are misled. This is often done to:
- prevent biasing participants before the experiment
- protect the confidentiality of the sponsor

Informed consent: Fully disclosing the procedures of the research before requesting
permission (agreement/consent)

Debriefing participants (post-study discussion) → several activities that follow the collection of data:
- Explanation of any deception
- Description of the hypothesis, goal or purpose of the study
- Post-study sharing of results
- Post-study follow-up for medical or psychological attention

Right to privacy and confidentiality, and protecting it:
- Obtaining signed documents
- Revealing a participant's information only with written consent
- Restricting access to data instruments where participants are identified
- Non-disclosure of data sub-sets

4.4 Ethics and sponsor

The sponsor (the commissioning party) has the right to receive research that has been conducted ethically. Confidentiality regarding sponsors:

Sponsor non-disclosure → the sponsor's right to dissociate themselves from the sponsorship of the research
Purpose non-disclosure → protecting the purpose of the study or its details
Findings non-disclosure → confidentiality of the findings until the management decision has been made

Right to quality research:
- Providing a design that is appropriate for the research question
- Maximizing the sponsor's value for the resources expended
- Providing data handling and reporting techniques appropriate for the data collected.

Ethical and unethical researchers and sponsors:

Some sponsors would like the researcher to behave unethically in gaining information from parties. What the researcher can do is educate the sponsor on how distorting the truth or breaking faith with participants might lead to future problems, or simply terminate the relationship with the sponsor.

Chapter 5 Quantitative and qualitative research

Epistemology, the theory of knowledge, is concerned with the question of how one acquires knowledge. The choice for quantitative or qualitative is an epistemological question.

→ Positivistic researchers' acquisition process consists of deducing hypotheses (explanations) and testing them against reality (quantitative).
→ Interpretivistic researchers acquire knowledge more by developing an understanding of phenomena through inductive, deep-level investigation and analysis of those phenomena (qualitative).

The difference between positivism and interpretivism does not map exactly onto that between quantitative and qualitative.

The choice of which method you are going to use depends on the following questions:
- What is the research problem?
- What is the objective, what kind of outcome are you looking for?
- What kind of information do you want to obtain and what do you already have access
to?
- Are you attempting to conduct a descriptive, explorative, causal or predictive study?

The quality of research does not depend on the method you are using but depends on the
quality of its design and how well it is conducted.

Methods of advancement and edges of knowledge:

→ When you are conducting scientific research, the main objective is gaining new knowledge.
When you research a new topic you can get published in new journals, and of course the available research methods for that topic are not yet well known or advanced. When you research a topic that has been researched before, advanced methods are available.
New research needs to be methodologically very rigorous to get attention.

5.2 Research design classifications

→ Research design has many definitions, but together they all imply the following essentials. The design basically shows what the researcher will do to arrive at the final answer to the research question, from hypothesis to final analysis:
- An activity- and time-based plan
- Always based on the research question
- Guides selection of sources and types of information
- A framework for specifying the relationships among the study's variables
- Outlines procedures for every activity.
The design provides answers for questions such as:
- What kind of answers is the study looking for, and which methods will be applied to find them?
- What techniques will be used to gather data?
- What kind of sampling will be used?
- How will time and cost constraints be dealt with?

Descriptors of research designs:

Purpose of the study
- Descriptive → who, what, when, where, how much
- Causal → how and why one variable changes another
- Predictive → what will happen in the future? (Requires very good knowledge of causality to predict future events)

Banks and government agencies try to predict stock markets and financial investments; think of how many times they get it wrong.

Degree of research question crystallization (how clearly the question has taken shape)
- Exploratory → discovering future research tasks and questions; used when the researcher lacks a clear idea of the problem (usually students and theses ;))
- Formal → begins where the exploratory phase leaves off; the goal is to provide a valid representation of the current state and to test the hypothesis or answer the research question.

Most studies start as exploratory before becoming formal.

Method of data collection

Monitoring → the researcher inspects the activities of a subject or the nature of some material without attempting to draw out responses from anyone (observation).
- An observation of actions performed by a group
- Licence plate recordings in a car park
- Searches in a library

Interrogation/communication study → the researcher questions the subjects and collects their responses by personal or impersonal means.
- Interviews, telephone conversations, etc.

Archival sources → secondary or primary data that is already available to the researcher.

Qualitative and quantitative studies can rely on both methods of data collection.

Control of variables:
Experimentation provides the most powerful support for a hypothesis of causation; researchers try to control variables and/or manipulate them.

Ex post facto → the researcher has no control over the variables in the sense of being able to manipulate them; they can only report what has happened or is happening.
- Researchers should not try to manipulate this design, since doing so introduces bias into the study!

Time dimension

Cross-sectional studies → are carried out once and represent a snapshot at a point in time.
Longitudinal studies → are repeated over an extended period of time → pose more risk of bias, since with panels the same people are questioned over a period of time.

A longitudinal study has the advantage that you can measure different variables over a longer period of time and see whether the outcome has changed → good for causal studies.

Usually studies are longitudinal, since research is conducted over a span of time.

Panels in longitudinal studies → the researcher may study the same people over time.
- In marketing, panels are set up to report consumption data on a variety of products.
Cohort groups in longitudinal studies → use different subjects for each sequenced measurement.
- For example, the service industry, in assessing the service level.

Research environment
Designs differ in whether they occur under actual environmental conditions (field conditions → at home, at workplaces or in shops participants visit), under staged or manipulated conditions (laboratory), or even artificial ones (simulations → role playing).

5.3 Exploratory, descriptive and causal studies

Exploratory research is particularly useful when the researcher lacks a clear idea of the problem. The field the researcher works in may be completely new to them, so that exploratory research is needed to provide the necessary information.
→ The objective is the development of hypotheses and research questions.

Exploration relies more on qualitative than on quantitative techniques:
- In-depth interviewing (usually an exploratory conversation rather than a structured interview)
- Participant observation (to perceive what participants in the setting experience)
- Films, photography and videotapes (to capture the group being studied)
- Projective techniques and psychological testing (role playing, thematic apperception)
- Case studies (for in-depth contextual analysis of a few events or conditions)
- Street ethnography (to discover how cultural sub-groups describe and structure their world at street level)
- Elite or expert interviewing (to obtain information from well-informed people in an organization or community)
- Document analysis (to evaluate historical or contemporary confidential or public records, reports and documents)
- Proxemics and kinesics (to study space and body-motion communication)

When these methods are combined, 4 exploratory techniques emerge:

1. Secondary data analysis

It is inefficient to start your own research and literature search from scratch; instead, secondary and primary data can be used to see what has already been done in previous studies and which methods work best.
- Published documents in the external environment
- The organization's own database
- Secondary sources

2. Experience surveys
When we conduct experience surveys, we would like to know the subjects' ideas about important issues or aspects of the topic, and discover what is important across the subjects' range of knowledge. Questions that emerge during the interview:
- What is being done?
- What has been tried in the past, with and without success?
- How have things changed?
- What are the change-producing elements?
- Who is involved in decisions and what role does each person play?

3. Focus groups
Often used for a new product or product concept. The output of a session is a list of ideas and behavioural observations, with recommendations by the moderator.
→ Can be used to sharpen research questions and to evaluate methods and design.
→ A useful method for pre-testing questionnaires, experiments and so on, because the focus group often contains people who could be respondents.

4. Two-stage design

If the direction of the project is not clear, it is wise to follow the two-stage design:
1. Clearly define the research question
2. Develop the research design

Descriptive studies → formal studies: structured, with clearly stated hypotheses or investigative questions.
- Descriptions of phenomena or characteristics associated with a population
- Estimates of the proportion of a population that has these characteristics
- Discovery of associations among different variables

Causal studies: seek the effect that one variable has on another, or the reason why certain outcomes are obtained.

Method of agreement: when two or more cases of a phenomenon share only one condition, that condition may be regarded as the cause of the phenomenon.

Causal relationships:
- Symmetrical: two variables fluctuate together, but we assume changes in neither variable are due to changes in the other.
- Reciprocal: two variables mutually influence or reinforce each other. For example, reading an advertisement leads to use of a product, which results in reading more advertisements of that brand/product.
- Asymmetrical: one variable is responsible for changes in the other (the IV is responsible for a change in the DV) → researchers and analysts look for these causal relationships.

When it is not clear which variable is dependent or independent:
 The relatively unaltered variable is the independent variable (age, social status)
 Time order: the independent variable (IV) comes before the dependent variable (DV)

4 types of asymmetrical relationships:

- Stimulus-response: An event or change (a stimulus) results in a response from some object
- Property-disposition: An existing property causes a disposition  An attitude or
characteristic towards something  Age and attitudes about saving
- Disposition-behaviour: A disposition causes a specific behaviour  opinions about a
brand
- Property-behaviour: An existing property causes a specific behaviour  Social class
and family saving patterns
Testing causal hypotheses:
You can never be certain that variable A causes B to occur. We seek three types of evidence
to test causal hypotheses:

Covariance between A and B
- Do A and B occur together?
- When A does not occur, is B also absent?
- When there is more or less of A, is there also more or less of B?
Time order of events
- Does A occur before B, or the other way around?
No other possible causes of B
- Do, for instance, C, D and E not co-vary with B in a way that suggests other relationships?
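The covariance evidence above can be illustrated with a minimal sketch; the advertising-spend (A) and sales (B) figures are hypothetical, and the Pearson coefficient is one common way to quantify covariation:

```python
# Hypothetical data: advertising spend (A) and sales (B) for six periods.
a = [10, 12, 15, 11, 14, 16]
b = [100, 115, 140, 108, 132, 150]

def pearson(x, y):
    """Pearson correlation: covariance of x and y scaled by their spreads."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    sx = (sum((xi - mx) ** 2 for xi in x) / n) ** 0.5
    sy = (sum((yi - my) ** 2 for yi in y) / n) ** 0.5
    return cov / (sx * sy)

r = pearson(a, b)  # close to 1: A and B covary strongly
```

A high r only establishes the first type of evidence; time order and the absence of other causes still have to be checked separately.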

Causation and experimental design:

- Each variable should be held constant, and not confounded with a variable that is not part of the experiment/study.
- Random assignment: Each person must have an equal chance of exposure to each level of the independent variable.
- Control group: there are one or more experimental groups that receive the treatment, and a control group that does not receive any appeals (the treatment).

Randomization: the basic method by which equivalence between the experimental and control groups is established; the groups should be equal. It is best to assign subjects to either the experimental or the control group at random until the groups are filled.

Matching: We need to be sure that the subjects in each group are 'matched' on the same classifications/characteristics. For example, if one group contains five people above 50 and the other only students under 20, matching has to be done to ensure both groups have the same composition.
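Random assignment, as described above, can be sketched as follows (hypothetical subject IDs and group sizes):

```python
import random

# Random assignment: shuffle the pool so each subject has an equal
# chance of ending up in the experimental or the control group.
subjects = [f"subject_{i}" for i in range(20)]
random.shuffle(subjects)
experimental = subjects[:10]
control = subjects[10:]

assert len(experimental) == len(control) == 10
assert not set(experimental) & set(control)  # groups do not overlap
```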

Causation and ex-post facto design

For some causal studies it is not appropriate to use random samples, or it is hard to establish a control and an experimental group. Instead, the researcher first gathers evidence regarding the topic and then makes a cross-classification comparison. In this way you can determine if there is a relationship between the variables.

For example, does innovation drive profits?
 First a list of potential subjects has to be made, with the variables that will be tested, for example the innovation level of the company and its profitability.
 Time order: was the company already profitable before innovation was put into practice?
 Make a cross-classification table to determine if there is a causal relationship

When these conditions have been met, a sample of the firms can be drawn.
 By using non-innovative as well as innovative firms, you can establish a random sample of subjects.

Post hoc fallacy:

Ex-post facto analysis is very hard, since the co-variation between variables must be interpreted very carefully. 'Post hoc fallacy' describes the resulting unwarranted conclusions:

- 'X precedes Y' is taken to mean that X caused Y

The point is that just because something comes before something else, the former did not necessarily cause the latter.
Chapter 6 Sampling strategies

Unit of analysis  Describes the level at which the research is performed and which objects
are researched. People or individuals are a common unit of analysis.
- Thinking carefully about a study’s unit of analysis can prevent difficulties and error
that may occur later in the problem definition and research design.

The choice of unit of analysis is related to the following 3 questions:
- What is our research problem and what do we really want to answer?
- What do we need to measure to answer our research problem?
- What do we want to do with the results of the study? To whom do we address it in
our conclusions?

6.2 The nature of sampling

Population element: the individual subject in the population on which the measurement is taken
Population: the total collection of subjects/elements
Census: a count of all the elements in the population; it includes all information. With 4000 elements in the population, a census includes the information of all 4000.

Reasons for sampling

- Lower cost
Samples cost less than a census; imagine interviewing 4000 people instead of a sample of a few hundred
- Greater accuracy
Possibility of better testing or interviewing (research has shown)
- Greater speed of data collection
Sampling speed reduces the time between recognition of a need for information and
the availability of that information
- Availability of population elements
Some situations require sampling: a census requires all elements to be measured, while a sample needs only a few. For example, in testing the breaking strength of a metal, a census would require destroying all objects, while sampling destroys just a dozen.

6.3 Sampling versus census

Census:
- Feasible when the population is small
- Necessary when the elements are quite different from each other

Sample  must be valid and representative


- Accuracy: to degree to which bias is absent in the sample  An accurate (unbiased)
sample is a sample where the under estimators and over estimator’s balance each
other. Systematic variance: the variation in measures due to some known or
unknown influences that cause the scores to lean in one direction more than another.
- Precision: A sample must have a sampling error that is within acceptable limits for
the studies purpose. It is measured by the sampling error of estimate, a type of
standard deviation measurement, a small standard error is ideal.
6.4 Types of sample design

Representation basis:
- Probability sampling: Each population element is given a known, non-zero chance of selection (random)
- Non-probability sampling: non-random and subjective; each member does not have a known, non-zero chance of being selected.

Probability samples allow the precision of the estimates to be assessed

Element selection:
- Unrestricted: Simple random sample; each population element has a known and equal chance of being selected for the sample  accomplished with computer software or a table of random numbers
- Restricted: All other forms of sampling
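As a minimal illustration of drawing an unrestricted (simple random) sample with software, assuming a hypothetical population of 4000 element IDs:

```python
import random

# Simple random sampling: every element has an equal chance of
# selection, and elements are drawn without replacement.
population = list(range(1, 4001))        # hypothetical element IDs 1..4000
sample = random.sample(population, 300)  # SRS of 300 elements

assert len(sample) == 300
assert len(set(sample)) == 300           # no element drawn twice
```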

6.5 Steps in sampling design

1. What is the relevant population? Who or what do you want to investigate?

2. What are the parameters of interest?
- Population parameters: summary descriptors (mean, variance, proportion)
- Sample statistics: descriptors of the relevant variables, used as estimators of the population parameters.

Nominal or ordinal scale  the sample proportion of incidence is used to estimate the population proportion

Interval or ratio scales  the sample mean and sample standard deviation are used to estimate the population mean and standard deviation.

Population proportion of incidence: the number of elements in the population that belong to a category, divided by the total number of elements in the population (the percentage of the population with that characteristic)

3. What is the sampling frame?
It is the list of elements from which the sample is drawn.

4. What is the type of sample?
Probability versus non-probability

5. What sample size is needed?
- The higher the dispersion or variance, the larger the sample must be to provide accuracy
- The greater the desired precision of the estimate, the larger the sample
- The greater the number of sub-groups within an estimate, the larger the sample
- If the calculated sample size exceeds 5% of the total population, the sample size may be reduced without sacrificing precision.

Researchers can never be 100 per cent sure that the sample reflects its population; therefore precision is expressed by:
- The interval range in which they expect to find the parameter estimate
- The degree of confidence they wish to achieve
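The interval-plus-confidence idea above can be illustrated with the standard sample-size formula for estimating a population proportion; the confidence level (z), expected proportion (p) and error margin (e) below are assumed example values:

```python
import math

def sample_size_proportion(z, p, e):
    """Required n to estimate a proportion p within +/- e at z-level confidence."""
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

# 95% confidence (z ~ 1.96), worst-case p = 0.5, margin of +/- 5 points:
n = sample_size_proportion(1.96, 0.5, 0.05)
print(n)  # 385
```

If the computed n exceeds 5% of the population, the rule above says it may be reduced (a finite-population correction) without sacrificing precision.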

6. How much will it cost?
Simple random samples are the most expensive, telephone interviews the least.

6.6 Complex probability sampling

Simple random sampling is often impractical:
- A complete list of the population is not available
- It fails to use all the available information about a population
- It may be too expensive

A more efficient sample in a statistical sense is one that provides given precision for a
smaller sample size (standard error of the mean)

4 alternative approaches
- Systematic sampling
Pick every k-th element of the population.
Identify the total number of elements, determine the sampling ratio (k = population size / desired sample size), identify a random starting position, then draw the sample by choosing every k-th element
- Stratified sampling
Segment the population into different strata or sub-groups; university students can for example be divided by class level, school or specialism. Afterwards a simple random sample can be taken from each stratum.

Proportionate versus disproportionate sampling:

Proportionate  Each stratum is properly represented so that the sample drawn from it is
proportionate to the stratums share of the total population.
- It has higher statistical efficiency than a simple random sample
- Easier to carry out than other stratifying methods
- Provides a self-weighting sample; the population mean or proportion can be
estimated simply by calculating the mean or proportion of all sample cases.
Disproportionate  Considering how a sample will be allocated among strata. Take a larger
sample if the stratum is larger than other strata.
- When differences among variances in strata are large, or the sampling costs differ.
Disproportionate sampling is desirable.

- Cluster sampling
With cluster sampling we divide the population into many small sub-groups, based on several criteria. Ideally the sub-groups resemble each other and each contains heterogeneous elements  this provides an unbiased estimate of the population parameters.
Two conditions foster the use of cluster sampling:
 The need for economic efficiency: it is cheaper than simple random sampling.
 The frequent unavailability of a practical sampling frame for individual elements.
Statistical efficiency is lower for cluster samples, since in practice the clusters are usually internally homogeneous, whereas simple random samples are heterogeneous. But the economic efficiency is usually great enough to overcome this (it is cheaper).

 The criterion is that the combined economic and statistical efficiency of the cluster design has to outperform simple random sampling.

- Area sampling
For populations that can be identified with some geographic area; it is the most important form of cluster sampling. It overcomes both the problem of high sampling cost and the unavailability of a practical sampling frame.

Designing cluster samples  In order to design cluster samples we must answer several
questions:
- How homogeneous are the clusters? When clusters are homogeneous, this contributes to low statistical efficiency. One can improve this by constructing clusters that increase within-cluster variance (heterogeneity).
- Shall we seek equal or unequal clusters (size): The sample means of clusters are
unbiased estimates of the population mean. This is more likely when clusters are
equal. To ensure this the following approaches can be met:
 Combine small clusters, split large clusters until all have an average size.
 Stratify clusters by size and choose from each stratum.
 Stratify clusters by size and then sub-sample using varying sampling fractions to
secure an overall sampling ratio.
- How large a cluster shall we take? Not clear which size is superior, depends on the
efficiency, variances of means and costs.
- Shall we use a single-stage or multi-stage cluster: For most area sampling, the
tendency is to use multi stage clusters.
- How large a sample is needed: Depends on the cluster design.
 Simple cluster sampling: Single-stage samples with equal size clusters, the only
difference between a simple cluster sampling and a simple random sample is the size
of the cluster.

For example: I want to interview a specific neighbourhood with 7 streets. Each street forms a cluster, so there are 7 clusters, each somewhat heterogeneous in terms of variables (age, gender, social status, income). Then I use simple random sampling (SRS) to pick one of the clusters for my interviews: if SRS returns number 4, cluster (street) number 4 and all its elements will be used for the interview.

Multistage Keep dividing your clusters into smaller groups by using SRS.

- Double sampling (sequential sampling or multiphase sampling)
Collecting data from a sample using previously defined techniques; based on this information, a sub-sample is selected for further study.
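A few of the probability designs above (systematic, proportionate stratified and cluster sampling) can be sketched with hypothetical data:

```python
import random

population = list(range(1, 701))  # 700 hypothetical element IDs

# Systematic sampling: k = population size / desired sample size,
# random start, then every k-th element.
k = len(population) // 70            # desired sample of 70 -> k = 10
start = random.randrange(k)
systematic = population[start::k]
assert len(systematic) == 70

# Proportionate stratified sampling: an SRS inside each stratum,
# sized to the stratum's share of the population.
strata = {"bachelor": population[:400], "master": population[400:]}
stratified = []
for members in strata.values():
    share = len(members) / len(population)
    stratified += random.sample(members, round(70 * share))
assert len(stratified) == 70

# Cluster sampling (the 7-street example): pick one whole cluster at
# random and use every element inside it.
streets = [population[i::7] for i in range(7)]  # 7 clusters of 100
chosen_street = random.choice(streets)
```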

6.7 Non-probability sampling  Non-random

Does not build on statistical theory; it often produces selection bias (since it is non-random) and non-representative samples (which say nothing about the total population).

Bias of non-probability samples can be reduced by:
- Post-stratification: Requires that we have information on the elements (age, gender) or the firm (size, industry) to make a stratification.
- Propensity scoring: Does not require information on the whole population, but a
second sample from previous research is believed to be more representative.
 Comparing your sample to the second sample of previous research allows you to
calculate propensity scores.
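Post-stratification weighting can be sketched as follows; the gender shares and sample counts are hypothetical:

```python
# Re-weight a non-probability sample so that known groups match their
# population shares (here gender, with hypothetical figures).
population_share = {"female": 0.50, "male": 0.50}
sample_counts = {"female": 30, "male": 70}   # biased convenience sample
n = sum(sample_counts.values())

weights = {g: population_share[g] / (sample_counts[g] / n)
           for g in sample_counts}
# Each female response now counts for more, each male for less:
print(weights)  # female ~1.67, male ~0.71
```

After weighting, the weighted group totals (30 × 1.67 and 70 × 0.71) both come to 50, matching the 50/50 population split.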

Usefulness of probability sampling: depends on the objective  whether you want to generalize, or whether you are interested in the effect size.

Usefulness of non-probability samples: when the population is not available probability sampling is not feasible, or when you are only interested in whether an effect is positive or negative.

Convenience sampling  Researchers have the freedom to choose whoever they can find.
 does not guarantee precision, but might be useful (evaluation of a department)
Purposive sampling
- Judgement sampling: The researcher selects sample members to conform to some criterion (the researcher judges who is in and who isn't)
- Quota sampling: To improve representativeness. For example, a high school consists of 40% female and 60% male students. When drawing a sample, you should apply the same quota, so in a sample of 10 choose 4 females and 6 males to represent the population in some manner.
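The high-school quota example can be sketched as a set of quota cells; the `accept` helper is a hypothetical illustration of a field-worker's selection rule:

```python
# Quota sampling: 40% female, 60% male, so a sample of 10 should
# contain 4 females and 6 males.
quota = {"female": 0.4, "male": 0.6}
sample_size = 10
targets = {group: round(share * sample_size) for group, share in quota.items()}

def accept(candidate_group, selected_counts):
    """Take a candidate only while their quota cell is still open."""
    return selected_counts.get(candidate_group, 0) < targets[candidate_group]

print(targets)  # female: 4, male: 6
```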

Quota control:
Precision control: the sample is controlled on combinations of several control variables simultaneously (e.g. six factors such as gender, age, religion).

Frequency control: the percentage of each characteristic in the population is matched by its percentage in your sample.

Problem with quota sampling  its representativeness: the data available for control may be outdated or inaccurate (the frequencies in the population might have changed).
 Despite its problems it is used a lot, as it is cheaper and faster than probability sampling. While there are dangers of bias and non-representativeness, the risks are usually not that great.

6.8 Sampling on the internet

 Narrows the population to those who have access, but a lot of information is available.

Internet sampling: might give an unrepresentative conclusion for the entire population (the elderly are under-represented on the internet) and respondents might be self-selected volunteers. Internet sampling does, however, have a lot of advantages: it is fast and cost-effective.

Chapter 7 Primary data collection: surveys

Primary data: Data collected for the current study
Secondary data: Data collected for another purpose than the current study (already existing data)

Communication and observation:

Communication and observation are often seen as qualitative approaches; however, they can also be quantitative.
- Quantitative observation: Structured observations 
- Quantitative communication: face-to-face structured interviews, phone surveys,
web surveys.
- Qualitative observation: Participant observation 
- Qualitative communication: in-depth individual interviews, semi and un-structured
interviews, focus groups etc..

The tools used differ according to the data to be collected.
- Survey research: highly structured; the respondent has to choose the answer that fits best.
- Unstructured or qualitative interviews: Respondents and interviewer are not strictly bound to questions and answers, and can speak freely.

7.2 Characteristics of the communication approach

Great strength of the survey = its versatility
Weakness = the quality and quantity of data depend on the respondent's willingness and ability.

- Personal interviews
- Telephone interviews
- Self-administered surveys

7.3 Choosing a communication method

Requirements for success:
1. The participant needs to possess the information regarding the topic and to understand why they are being questioned
2. The participant must understand his role in the interview
3. The participant must perceive adequate motivation

Information  the interviewer can do little about the participant's level of information; what he can do is ask screening questions, to determine whether the participant can answer the questions on the topic.
Motivation  in telephone and personal interviews it is the responsibility of the interviewer to motivate the respondent; in web-based and self-administered surveys the questionnaire itself has to do this.

Increasing participants' receptiveness  motivation in an interview relies on many factors, such as the way in which questions are asked (emotional or formal).
- Establish a friendly relationship
- The participant must feel that the interview will be pleasant
- The participant must feel that the outcome is important to the research
- Dismiss any mental reservations

Non-response error  a source of bias; occurs when participants do not respond, or do not answer successfully.

Reducing non-response error:
 Call-back procedures
 Taking a non-response sample and weighting the results from this sample
 Substituting another individual for the missing participant

Call-back  success depends on the time of day and day of the week
Weighting  non-participants are treated as a new sub-population; take an SRS from this group.
Substitution  only if absolutely necessary; in households you can ask someone else from the specific household.

7.4 communication methods compared:

Mixed mode  when you don’t find a suitable method for your study, you can combine
methods.

7.5 personal interviews

Face-to-face interviews  two-way communication between participant and interviewer.

Advantages:
- In depth
- Can improve quality of info received (further questions)
- More control, pre-screening and set up location
- Adjust languages

Disadvantage
- Costly in time, money
- Talking to strangers can be difficult
- Questions can be altered  bias
- Response errors

Interviewing techniques:
The interviewer needs to ensure that the information obtained answers the question's objectives.

Probing: technique of stimulating the participant to answer more fully and relevantly
- A brief understanding and interest (yes, I see, uh, aha)
- An expectant pause
- Repeating the question
- Repeating the participants reply
- A neutral question or comment, what do you mean?
- Question clarification

 For example, if the participant answers ‘I don’t know’ try to use probing techniques to get
an answer from the participant.

Recording the interview: writing down the response, repeat and use special instruments if
applicable.

Response errors: when the data reported differ from the actual data.
 Participant-initiated error: The participant fails to answer the question fully and accurately.
 Interviewer error: the quality of the data can be influenced by the interviewer through all sorts of factors.

7.6 Telephone interviewing

Advantages: moderate costs and national and international reach
- CATI (computer-assisted telephone interviewing): immediate entry of responses
- A computer-administered telephone survey is another means of securing data immediately  no interviewer, only the computer; the refusal rate is higher
- Interviewer bias is reduced!

Disadvantages:
- No telephone service households
- Inaccurate or non-functioning phone numbers
- Limitations on interview length
- Limitations on use of visuals.
- Ease of termination (hanging up the phone)
- Less participant involvement, experience
- Distracting physical environment (when in the car or walking in the city)

Telephone interview trends:


- Answering machine and multi lines will affect sampling.
- Caller ID technology will send a disconnect sign when calling.

7.7 Self-administered surveys

 e.g. service evaluations or mail surveys

Advantages:
- Costs
- Sample accessibility: through mail
- Response time: participants can postpone their responses, which can improve quality but also increases the likelihood of (non-response) bias
- Anonymity  mail surveys are impersonal, providing anonymity.
Disadvantages:
- Topic coverage: type and amount of data that is included, often very short. Long
surveys often require an incentive or personal benefit.
- Non-response error

Reducing non-response:
- Follow ups and reminders
- Preliminary notification  advanced notification by phone or email
- Concurrent techniques:
 Monetary incentives work very well! So does reporting the findings back to the participant
 Short questionnaires draw better response than long ones
 Respected sponsorships increase the response rate

Drop-off system  a lightly trained interviewer personally delivers the surveys and picks them up.

Business research  when conducting surveys in an organization you should keep in mind that employees differ in their capacity and authority to answer surveys.

7.8 Web-based surveys

 The online form of self-administered surveys.

Targeted web survey  the researcher controls who receives the survey.
Self-selected survey  the participant chooses to participate, for example via pop-ups on a screen or advertisements.
Social-media-based surveys  Facebook, for example, helps to reach out to potential participants  a kind of snowball effect.

Web-based questionnaires  have the power of CATI but without the expense of network administrators.

7.9 Structured observations:

- The only method that can be used to obtain information from subjects who cannot talk or read, such as young children and animals.
- Collects data at the time it occurs, reducing retrospective biases  people forgetting things after a week, or their opinions changing
- Reduces respondent bias.
- Method-reactivity bias  when respondents change their behaviour because they know they are being observed
- Captures behaviour in its natural habitat, with no bias from the environment (laboratory)
- Observation is less demanding than questioning.

Limitations:
- The observer must be present at the scene when it happens.
- A slow and expensive process
- Limited in what it can capture: values and opinions cannot be observed directly  combining observation with interviews is a solution
- Restricted to the present; it covers neither the past nor the future
Direct observations:
When the observer is physically present and personally monitors.

Indirect observations:
When recording is mechanical, photographic or electronic.

Concealment  observers shield themselves from the subject (e.g. one-way mirrors). This reduces the risk of bias, but raises ethical questions.

Partial concealment  the presence of the observer is known, but the purpose of the observation is concealed.

Behavioural and non-behavioural observations

Non behavioural
Record analysis: Historical or current records, and public or private records.
Physical condition analysis: inventory condition analysis, or studies of plant safety.
Process (activity) analysis: for example, manufacturing processes or traffic flows.

Behavioural:
- Non-verbal behaviour: Body movement
- Linguistic behaviour: Sounds made in a class (ahs and uhs)
- Extra linguistic: Vocal, pitch loudness and timbre, vocabulary, pronunciation,
characteristics expressions  Unabomber example
- Spatial relationship: How a person relates physically to others (proxemics, concerns
how people organize the territory around them)

Measurement in structured observation:

The specific conditions, events or activities that we want to observe determine the observational reporting system.
To specify the observation content, we should include both the major variables of interest and any other variables that may affect them.
From this cataloguing, we then select the items we plan to observe.
For each variable chosen, we must provide an operational definition if there is any question of concept ambiguity or special meanings.

Factual and inferential observations

Factual observations are direct descriptions of what is happening and what can be seen.
Inferential observation translates what is seen into a concept that cannot be observed directly.

See exhibit 7.12, p. 235

Example of a factual observation: counting the number of people with sweaty foreheads at an airport.
Example of an inferential observation: counting people who are potential carriers of swine flu by looking at sweaty foreheads.
Observations of physical traces. Some very innovative observational procedures can be both non-reactive and inconspicuously applied, like:
Unobtrusive measures
These approaches encourage creative and imaginative forms of indirect observation, archival searches, and variations on simple and contrived observation.
Of particular interest are measures involving indirect observation based on physical traces, which include erosion (measures of wear) and accretion (measures of deposit).
Physical trace methods present a strong argument for use based on their ability to provide low-cost access to frequency, attendance and incidence data without contamination from other methods or reactivity from participants.

They are excellent 'triangulation' devices for cross-validation.

Conducting structured observations

Designing the observational study requires us to answer the who, what, when, how and where questions.
For structured observations, checklists are a common method to record the data, usually filled in while you observe to limit memory-induced biases.

Developing a checklist is similar to designing a questionnaire: it requires that we formulate questions reflecting the variables of interest, with corresponding answer categories that allow us to quantify what is observed.
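A minimal sketch of such a checklist with predefined answer categories (the category names are hypothetical):

```python
from collections import Counter

# Structured-observation checklist: only the predefined categories may
# be recorded, and each observation is tallied as it happens.
categories = {"non-verbal", "linguistic", "extra-linguistic", "spatial"}
tally = Counter()

def record(category):
    """Tally one observed act; reject anything not on the checklist."""
    if category not in categories:
        raise ValueError(f"not on the checklist: {category}")
    tally[category] += 1

record("non-verbal")
record("non-verbal")
record("spatial")
print(tally["non-verbal"])  # 2
```

The fixed category set plays the role of the questionnaire's closed answer options: it is what makes the observation quantifiable and comparable across observers.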

Chapter 8 Primary data collection: Qualitative data

Semi-structured & unstructured interviews: have a specific topic list but a free format  used when you want to learn something from the participant concerning a topic.
Structured interviews: have a specific order and closed questions  predefined questions; basically a quantitative method of collection.

Instruments: questions in semi structured and unstructured interviews

 Interview guide: Serves as a memory list for interviewer and ensures questions are asked
the same way.

Question types:
- Introductory questions  'Thank you for coming. Can you tell me about…'
- Follow-up questions  'What do you mean by that?'
- Probing questions  'Do you have an example?' (trying to get a fuller answer from the respondent)
- Specifying questions  'Could you elaborate on that?'
- Direct questions  'What is your point of view?'
- Indirect questions  'What do people around here think?'
- Structuring questions  moving to the following topic
- 'Silence'
- Interpreting questions  'Do you mean that…?'
Information recording
 Recorded on tape or digitally
 Two interviewers: one takes notes while the other asks questions, or both question and interpret.

Interviewer qualifications:
- direct the interview, give guidance
- expert in the field
- probing respondents

Active listening  be a good listener and a good questioner

Focus group
 a qualitative form of interview with a group of elements from the population
 led by a moderator; the moderator is in charge of probing for ideas and feelings in the group

Homogeneous groups are more common than heterogeneous ones. However, a group that is too homogeneous might become too uniform, or be dominated by some members.

Online focus groups  make it possible to hold focus groups overseas at any time of day.
Synchronous  at the same time, through video
Asynchronous  through a panel, email or forum. Disadvantage  no facial expressions.
Disadvantage: only reaches people with internet access (becoming less and less of a problem)

Participant observation: the researcher fully dives into the world he wants to research (as in a documentary)

Conducting observational studies:

- Prepare the observational study (define content and observational targets)  you need to be well informed about the topic before conducting observations.
- Secure and train observers  observers need to be trained; when working in a group, everyone needs to have the same level of information.
 Concentration: the ability to function in a setting full of distractions
 Detail-oriented: the ability to remember details
 Unobtrusive: blends into the setting (not an attention-grabber)
 Experience level: the ability to extract the most from observations
- Collect the data
- Analyse the data

Most observers are subject to fatigue, the halo effect and observer drift, which refers to a loss in reliability and validity that affects the coding.
 Observer trials with the instruments should be run until a high degree of reliability and validity is achieved.

Data collection:
- Who  which participants are to be observed, and who carries the responsibility at the ethical level?
- What  the unit of analysis, the characteristic of the observation (the act). Event sampling  multiple events (acts). Time sampling  a time interval or continuous timeframe.
- When  at what moment?
- How  field notes or checklists are the most common
- Where  where does the act take place? Reactivity response  the respondent is aware of the fact that he is being observed and his responses are biased (Hawthorne effect)

Data analysis  2 approaches
 Triangulation  several points of view on the topic/data
 Discuss the outcome with experts and participants.

Observer-participant relationship  2 dimensions
- Becoming actively involved in the participant's world allows you to get a better understanding of what is happening.
- Concealment  an ethical issue (spying)

Conducting participant observations
 The key issue is who and what you should observe.

Field notes are the primary tool for data collection
- Direct notes: key words written down while observing
- Immediate full notes: write full notes immediately afterwards
- Limit the observation moment: limit the time spent in the setting so you can fully record all information
- Rich full notes: very detailed notes of all the things you have encountered and observed, so that you can still understand them months later.

Chapter 9: Secondary data and archival sources.

The disadvantages of secondary data:

The main problem is that they were not collected with your research question in mind.
To assess secondary data, you need to address the following questions:
- Is the information provided in the secondary literature suitable to answer your research questions?
a. Does it cover all the information you need?
b. Is the information detailed enough?
c. Does it use the same definitions that you use in your research?
d. Is the data accurate enough?
- Does it address the population that you want to investigate?
a. Does it refer to the same unit of analysis?
b. Is the sample taken a good representation of the population?
- Were the secondary data collected in a relevant time period?

If your secondary data cannot answer all the questions above, their quality is questionable. For example, if you want to assess a strategy concerning financial models and the secondary data you have examines a company's liability, then you don't have the right information.

Sample quality: The data stated in secondary data needs to address the same population. If
the secondary data used a non-probability sample in a high school, and in your research you
want to perform a SRS for the entire population, the data is not useful.

5 factors to evaluate sources:
- Purpose: Why does the information exist? Does it achieve its purpose?
- Scope: How old is the info? How often is it updated?
- Authority: What are the credentials of the author institution or organization that
sponsored the information?
- Audience: to whom does the information source cater? What level of knowledge is
necessary?
- Format: How quickly can you find required information? How easy is the information
to use? Is there an index?

9.2 Sources of secondary data

Internal written data: Invoices, memos


External written data: annual reports, newspapers and magazines.
Internal electronic data: Management info systems, accounting records
External electronic data: CD-roms, websites, databases.

9.3 how to use secondary data efficiently

- Merging multiple sources


- Adjusting research problem to available data
- Investigating which research problems can be investigated with the available data

9.4 Secondary data for qualitative research

Secondary data has a prominent role in qualitative research. For example: Case studies rely
on data sources, personal interviews with key people or internal documents.

9.5 Data mining

Describes the process of uncovering knowledge from databases stored in warehouses.


Purpose is to identify valid, novel, useful and ultimately understandable patterns in data.
 Can be seen as a very sophisticated tool for an inductive discovery process.
 Data warehouse: Electronic repository for databases that organizes large volumes of
data.
 Data marts: compile locally required information

Pattern recognition: for example, MasterCard analyses 12 million transactions daily; with
the use of data mining they try to detect fraud through pattern recognition.

Data mining process:


- Sample: Decide between census and sample data, entire dataset or a sample?
- Explore: identify relationships within data  Numerically or visually (data
visualization), look for outliers, sample size
- Modify: Modify or transform data (adjust research questions or data)
- Model: Develop a model that explains the relationships, neural networks, decision-
trees, classification, estimation, genetic based models.
- Assess: Test the model's accuracy  accuracy: is it complete and does it match your
data; reliability: is the information independent; reality check
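As a sketch of these five steps, the following stdlib-only Python walks one invented dataset through Sample, Explore, Modify, Model and Assess. The data, the simple linear model and every variable name are illustrative assumptions, not a prescribed procedure:

```python
import random
import statistics

random.seed(42)

# Sample: draw a sample instead of processing the full "census" dataset
population = [(x, 2.0 * x + random.gauss(0, 1)) for x in range(1000)]
sample = random.sample(population, 100)

# Explore: summary statistics plus a crude outlier check
ys = [y for _, y in sample]
mean_y, sd_y = statistics.mean(ys), statistics.stdev(ys)
outliers = [(x, y) for x, y in sample if abs(y - mean_y) > 3 * sd_y]

# Modify: transform the data (here, centre the predictor)
xs = [x for x, _ in sample]
mean_x = statistics.mean(xs)
centred = [(x - mean_x, y) for x, y in sample]

# Model: fit a least-squares line y = a + b * x (intercept = mean after centring)
sxx = sum(cx * cx for cx, _ in centred)
sxy = sum(cx * (y - mean_y) for cx, y in centred)
b = sxy / sxx
a = mean_y

# Assess: R-squared as a basic accuracy check against the sample data
ss_tot = sum((y - mean_y) ** 2 for _, y in centred)
ss_res = sum((y - (a + b * cx)) ** 2 for cx, y in centred)
r_squared = 1 - ss_res / ss_tot
print(round(b, 2), round(r_squared, 2))
```

With the slope of 2 built into the fake population, the fitted slope lands near 2 and R² near 1, which is what the Assess step should confirm.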

Chapter 10: Content analysis and other qualitative approaches

10.1 Content analysis


 A technique based on the manual or automated coding of transcripts, documents, articles
or audio and video. Also used to analyse secondary data!

The primary objective of content analysis is to reduce the often copious information to a
manageable amount.
 Condensation and categorization.

The information from content analysis can be used to answer the following questions:
- What are the antecedents of media coverage? Why does media coverage vary
over time?
- What are the characteristics of media coverage? Why do newspapers report positively
or negatively?
- What are the effects of media coverage? Why do some newspapers report later than others?

Advantages:
- Adds to transparency: It is clear to readers what the researcher did
- Other can replicate your research
- Is unobtrusive and non-reactive

Disadvantage:
- Quality depends on input
- Coding and interpretation are subject to interpretation bias: different researchers may
code differently

Process of content analysis:

- Research problem
- Define population of sources and selection procedure  all sources or sample
sources? Do you want to compare them or just analyse?
- Coding procedure: Prescriptive (predetermined codes) or open analysis (coding
during the process)?
- Coding frame: Categorization and list of all codes used.

Software packages for analysis: QDA tools
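A prescriptive coding run (predetermined codes) can be sketched in a few lines; the coding frame and documents below are invented for illustration, and real QDA tools do far more:

```python
from collections import Counter

# Hypothetical coding frame: category -> predetermined indicator words
coding_frame = {
    "positive": {"growth", "profit", "success"},
    "negative": {"loss", "decline", "risk"},
}

# Invented "documents" standing in for transcripts or articles
documents = [
    "profit growth beats expectations despite risk",
    "decline in sales signals loss",
]

# Apply the frame: count how often each category's codes occur
counts = Counter()
for doc in documents:
    for word in doc.split():
        for category, indicators in coding_frame.items():
            if word in indicators:
                counts[category] += 1

print(counts["positive"], counts["negative"])  # 2 3
```

The output table of category frequencies is exactly the "condensation and categorization" the section describes: copious text reduced to a manageable summary.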

10.2 Narrative analysis (narrative = story, subjective)

 Qualitative explorative approach for in-depth investigations. Allows the researcher to


understand phenomena from the respondent’s perspective. Used on in-depth interviews,
secondary data (biographies)

Structural analysis and key functions of narrative


Structural analysis emphasizes how stories are told and involves linguistic and language
analysis. Key functions:
1. Abstract statement
2. Orientation segments: When, where and who is involved
3. Complicating action: builds up the sequence of actions and events
4. Evaluation: describes how storytellers assess the actions and informs the researcher about
their attitude.
5. Resolution: describes what finally happened or what the conclusion is.
6. Coda: gives insight into the importance of the story, and indicates which phenomena occurred
related to the story.

Thematic analysis:
Focuses on the content of the narrative: what has been said. The main objective is to identify
common themes in a bundle of stories.
 Temporal organization of the story: cuts the story into smaller pieces and orders them
sequentially.

Interactional analysis:
Dialogues between storyteller and listeners. Accounts for the collaborative processes
between the two in constructing a story.

Considering context:
As narrative analysis is a qualitative method, it should incorporate the specific context in its analysis.
How a story is told might differ across different times. The differences in the stories in different
contexts yield insights into the evaluation segments, and by considering the context we are better
able to understand the process as a whole.
10.3 Ethnographic studies

 Used to study business phenomena. The analysis depends on the problem statement. A characteristic of an
ethnographic study is its richness of description.

Elements of ethnographic study:


Multiple information sources: combine interviews with observations, informal talks etc.
Employing different perspectives: Obtain info from different information providers (departmental)
Record and present different types of information: Different types of research combined.
Qualitative interviews, with quantitative frequencies and for example observations.

10.4 Action research

Action research  Continuous interaction between researchers, participants and practitioners.


Objective is social change or the production of socially desirable outcomes (for example a new product,
or an assessment and adjustment)

Advantages:
Cares less about general principles and places strong emphasis on cooperation between researcher
and participants.

Disadvantages:
- Context dependent
- Researchers rarely have full control over the environment

10.5 Grounded theory

 Provides a general framework for conducting qualitative research. It starts from collecting data
and uses these data in an iterative process of coding. Grounded theory basically means that
researchers should reflect on previous coding and check whether the codes are still representative.

Open coding: Conceptualization of words and phrases, all info is labelled with categories
Axial coding: Identify linkages between categories. Developing theoretical explanations.
Selective coding: After several rounds of axial coding. The researcher focusses on categories and
attempts to develop a new grounded theory (start all over)

Theoretical sampling: is not about representativeness like probability sampling, but more concerned
with which cases would be of additional relevance.

Theoretical saturation: is a stopping rule for qualitative research. You stop when data does not
provide new information.

Grounded theory 4 criteria that assess research:


- Fit: Describes how well the categories represent real incidents
- Relevance: Describes how useful a theory is for practise.
- Workability: Refers to quality of explanation: Does the theory work?
- Modifiability: Refers to whether the theory can be adapted when it is compared with new data.

Advantage grounded theory: provides a convincing framework for a systematic inquiry into
qualitative data.

Disadvantages: pre-theoretical thoughts can result in feasibility problems; it is very time-consuming
and criticized for not giving new theories, but rather a categorization of data (e.g. a coding framework)

Chapter 11: Case studies

 Suitable for exploratory, explanatory and descriptive research.

A Case study is an empirical inquiry that investigates a contemporary phenomenon within its
real-life context. It is more an approach to investigate a phenomenon than a method to
collect data. Usually researchers combine methods. A case study is built upon interviews and
participant observations.

Instead of a sampling logic it conducts a replication logic, therefore not generalizable to a


population. The main idea of replication logic: The same phenomenon would happen
under the same circumstances, or that the phenomenon differs in other situations.

Advantages of case study compared to other approaches:


It relies on multiple sources of evidence, such as interviews, observations and documents.
It allows the consideration of the specific context.

Disadvantages of case study:


Not generalizable to a population.
High risk of bias.

Objective of case study: Detect patterns and explanations. The objective is to understand a
real problem and to gain insights in developing new explanations and theories

For example: if I want to know what companies' latest products were and whether they implemented
new innovations, I compare the latest products from several companies and look for patterns and
explanations (cases).

Single and multiple case studies:


 Single: just one case, used for investigating extreme or unique cases (no previous studies
available), or a case that builds on several previous studies.
 Multiple: several cases. You have to think about which cases you will use.

11.2 The richness of evidence sources.

- Interviews (unstructured, structured and semi-structured)


- Documents and archives: in many forms; letters, reports, journals, agendas etc.
- Observation: direct (researcher is part of observation) and participant observation

11.3 How to conduct good case study research?


Quality  depends not on the approach, but on how the study is conducted.

8 criteria for good-quality case study research:


 Purpose clearly defined
 Research process detailed (who you interview, or what documents you will use)
 Research design thoroughly planned (explain clear thinking behind selection)
 High ethical standards applied (protecting the rights of participants, sponsor etc.)
 Limitations frankly revealed (to what extent the case study fulfils the need)
 Adequate analysis of decision-makers’ needs (how you assess info and how you
combine them)
 Findings presented unambiguously (use tables and graphs and include all details)
 Conclusion justified (Conclusions supported by findings)

Chapter 12 experimentation

12.1 what is experimentation?

Causal methods: address why events occur under some conditions and not under others.
Ex post facto  a researcher interviews respondents or observes what is and what has been;
this method can also uncover causality. Through experimentation the researcher can alter
variables and observe what changes.

Questions of internal and external validity:


- Does the experimental treatment determine the observed difference, or was some
extraneous variable responsible?
- How can one generalize the results of a study across times, settings and persons?

Experiments: studies in which the researcher intervenes in the measurements (by altering a
variable, for instance)

Intervention: Usually the researcher manipulates the independent variable (IV) and observes
its effect on the dependent variable (DV)  the IV causes the DV to occur

3 types of evidence form the basis for causal relationships:


- IV and DV are correlated (presence/absence of one results in presence/absence of the
other)
- Time order: the DV should not occur before the IV, but they may occur simultaneously
- Other variables should not be responsible for the effect on the DV

12.2 An evaluation of experiments

Advantages:
- The researcher’s ability to change/manipulate IV  Control group (serves as
comparison with experimental group) and pre, and post-test (measurement before
and after manipulation).
- Ability to control extraneous variables, location or environmental variables.
- Low cost of creating test situations
- Replication leads to discovery of an average effect of the independent variable across
people, situations and times.
- Ability to exploit naturally occurring events to reduce subjects' perceptions of the
researcher as a source of intervention or deviation.

Disadvantages:
- Artificial or unnatural environments might have an effect on subjects.
- Generalization from non-probability samples can pose problems despite random
assignment
- Experimental equipment can be expensive
- Experimental studies of the past may no longer be relevant
- Experiments are used for studies with people, could pose ethical limits.

12.3 conducting an experiment

In order to make an experiment successful the researcher should:


- Select relevant variables  hypothesis: (1) select the variables that best operationally
represent the original concept; (2) determine how many variables to test; (3) select
or design appropriate measures for them.
- Specify the levels of treatment: specify the variable at different levels of treatment.
For instance, for the variable salary  levels of treatment: high, middle, low.
- Controlling the experimental environment: Environmental control means holding constant
the physical environment of the experiment. Blind is when the subjects do not know
whether they are given the treatment. Double blind is when neither the subjects nor the
researchers administering the treatment know who receives it.
- Choosing the experimental design: Serves as positional and statistical plan to
designate relationships between experimental treatments and observations.
- Selecting and assigning subjects: random sampling is used in the same way as for a
survey, via a randomization technique (SRS). Matching  used when randomization is not
possible (looking for similar characteristics in possible subjects)
- Pilot testing, revising and testing: Pilot testing is intended to reveal errors in an early
stage. Pre-testing is to permit refinement in the final stage.

12.4 Validity in experimentation

internal validity  Are the conclusions we draw really causally related? Did the
measurement instrument really measure what it is designed to do?

External validity  does an observed causal relationship generalize across persons, settings
and times?

Internal validity 7 threats:


- History  During the time that an experiment is taking place, some events may occur
that can alter the experiment. Therefore, you can take control measurement before
and after the alter of variable.
- Maturation: Subject can become tired or hungry, or change its view in a specific
timeframe.
- Testing: Experience in taking tests can alter the outcome between the two
- Instrumentation: changes in measurement instrument or observer can change the
outcome
- Selection: selection of subjects; the groups should be equivalent in every respect.
- Statistical regression: based on extreme cases and outliers; extreme scores tend to
move toward the mean when measured again (for example, scores taken in a bad mood will fluctuate).
- Experimental mortality: when subjects leave the group.
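The statistical-regression threat can be made concrete with a small simulation (illustrative only; all numbers are invented): subjects selected for extreme pre-test scores drift back toward the mean on the post-test even without any treatment at all.

```python
import random
import statistics

random.seed(1)

# Each subject has a stable "true ability"; both tests add independent noise
true_ability = [random.gauss(50, 10) for _ in range(10_000)]
pre = [a + random.gauss(0, 10) for a in true_ability]   # noisy pre-test
post = [a + random.gauss(0, 10) for a in true_ability]  # noisy post-test

# Select an "extreme" group: subjects with very high pre-test scores
extreme = [i for i, score in enumerate(pre) if score > 75]
pre_mean = statistics.mean(pre[i] for i in extreme)
post_mean = statistics.mean(post[i] for i in extreme)

# The post-test mean of the extreme group regresses toward the overall mean (50)
print(round(pre_mean, 1), round(post_mean, 1))
```

No treatment was applied, yet the extreme group's average falls on the second test, which is exactly why a naive pre/post comparison of an extreme group overstates or invents an effect.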

Other important threats:


- Diffusion or imitation of treatment: Conversation between experimental and control
group
- Compensatory equalization: when the experimental group's treatment is far more
desirable, this might call for compensation for the control group to equalize the
effects.
- Compensatory rivalry: when the control group knows it is the control group, it will
try harder.
- Resentful demoralization of the disadvantaged: when the treatment is desirable and
the experiment obtrusive, control groups might lower their cooperation and output.
- Local history: When experimental groups and control groups are assigned together.
This can be solved by randomly assigning the groups.

External validity:
The interaction of the experimental treatment with extraneous factors and the resulting
impact on the ability to generalize to times, settings or persons.

Interactive possibilities (threats):


- The reactivity of testing on X: When the subject is given a pre-test, he might respond
to stimulus X in a different way.
- Interaction of selection and X: The selected subjects of a population may not be the
same as the population to which one wishes to generalize results.
- Other reactive factors: the setting the subjects are in may alter their responses; or,
when they know they are in an experiment, they may alter their responses
(Hawthorne effect)

Internal validity can be addressed by the careful design of the experiment. External validity is a
matter of generalization  the closer the events are in time, space and measurement, the
more likely this is.

12.5 Experimental research design

True experimental designs  have an experimental group and a control group; these two
groups are made equal through random assignment or matching.
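Random assignment itself is mechanical: shuffle the pool of subjects and split it. A minimal sketch (the subject names are invented):

```python
import random

random.seed(7)

# Hypothetical pool of 20 subjects
subjects = [f"subject_{i}" for i in range(20)]

# Shuffle, then split into equal experimental and control groups
random.shuffle(subjects)
experimental, control = subjects[:10], subjects[10:]

print(len(experimental), len(control))  # 10 10
```

Because every subject had the same chance of landing in either group, pre-existing differences are spread randomly rather than systematically, which is what makes the two groups comparable.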

Pre- and post-test design (within subject design)


The first measurement is taken before the treatment, the next after. It does not meet the
requirements of a true experiment since the testing is within a subject, not between subjects.

Post-test only control group design (between subject design)


The pre-test measurements are left out; they are not necessary when randomization is possible.
Threats are reduced since there is only one-time testing (maturation, history, selection and
statistical regression are not present). Mortality is still an issue.

Pre-test – post-test control group design (Experimental & control group, random
assignment)
Very effective approach; deals with the 7 major internal validity problems.

Factorial survey/vignette research: researcher presents the subject with a brief and explicit
description of a situation (fact) and then asks him or her to assess the situation or to make a
decision.

Field and quasi experiments

Laboratory experiment: the researcher has the ability to control every variable.

Field experiments; conducted in a natural setting, participants don’t know that their
behaviour is being observed.

Disadvantages of field experiments: the researcher cannot manipulate variables to the same extent.


They also raise ethical questions.

Quasi experiment: the researcher often cannot control who is exposed to the experimental treatment.
It is inferior to a true experimental design, but usually superior to pre-experimental designs.
Useful for studying well-defined events (natural disasters, a new law)

Non-equivalent control group design: the test and control groups are not randomly assigned.
Time-series design: repeated observations before and after treatment, allows subjects to act
as their own control (good way to study unplanned events)

Factor: used to denote an independent variable. Factors are divided into levels, which
represent sub-groups (male/female etc.)

Active factors: can be manipulated


Blocking factors: can be identified and classified on an existing level: gender, age, group etc.

Chapter 13: Questionnaires and responses

13.1 developing the instrument design strategy

Instrument design phases:


Phase 1. Review of management question hierarchy and developing design strategy
Phase 2. Constructing and refining the measurement questions
Phase 3. Drafting and refining instrument.

13.2 Phase 1: Management research question hierarchy revisited

From management dilemma to measurement question goes through 4 question levels:


- Dilemma question  Dilemma stated in question form, what the researcher wants to
solve
- Research question  the fact-based translation of the question the researcher must
answer in order to solve the management question
- Investigative question  Specific questions that will provide information to answer
the research questions
- Measurement question  Questions participants answer to gather the required
information to resolve management questions.

4 Questions to plan a successful survey:


- Data type: nominal, ordinal, interval, ratio
- Communication approach: Personal interview, telephone surveys, mail, computer or
combination.
- Question structure: Administrative (identify participant, interviewer, location,
conditions), Classification (demographic variables), Target (address the investigative
questions: structured, unstructured, or a combination of the two). In-depth
interviews and focus groups are widely used as well.
- Disguising objectives and sponsors: Disguised question is used to hide the real
intention the question has (if you want to conceal the sponsor, or true purpose).
- Preliminary analysis plan: Serves as a check whether the planned measurement
questions meet the data required to answer the management questions.

When disguising the study objective is or is not an issue:


- Willingly shared, conscious level: Disguised or undisguised
- Reluctantly shared, conscious level: often disguised, projective technique
(participants may not wish to reveal their true feeling or may give stereotypical
answers)
- Knowable, limited conscious-level: often disguised. Participants may hold a certain
attitude but it is not clear what causes this attitude
- Subconscious-level: Seeking insight in the basic motivations, underlying attitudes or
consumption practices may or may not require disguised techniques.

13.3 Constructing and refining the measurement question  Phase 2

In this phase you draft a specific instrument design with administrative questions, target
questions, classification questions and measurement questions.

A quality communication instrument should accomplish the following tasks:


1. Encourage each participant to provide accurate responses:
A quick answer is often closer to what a respondent really thinks than a well elaborated one.
2. Encourage each participant to provide an adequate amount of information
Some respondents are very talkative and some aren’t. The interviewer should be skilled in
capturing information and encouraging the participants by probing or repeating questions
3. Discourage each participant from refusing to answer specific questions:
Some questions touch on sensitive subjects, so response refusal is likely. Emphasizing the
importance of the information and that it will be kept confidential will increase the
response.
4. Discourage each participant from early discontinuation of participation:
This applies mainly to mail questionnaires. In interviews this chance is smaller since the
interviewer can convince the participant to continue.
5. Leave the participant with a positive attitude about the survey participation:
Close surveys with an appreciation of the participants and willingness to cooperate,
increasing the positive attitude they have afterwards.

13.4 Question content

4 questions that guide question content and issues:


- Should this question be asked?  Purposeful versus interesting
- Is the question of proper scope and coverage?  incomplete or unfocused questions,
multiple questions, double-barrelled questions (two or more questions in one), precision
(does the question ask precisely what we need to know?)
- Can the participant adequately answer this question, as asked?  Time for thought,
participation at the expense of accuracy, presumed knowledge, Recall and memory
decay, Balance (general versus specific), objectivity
- Will the participant willingly answer this question, as asked?  sensitive
information

Question wording:
It is frustrating when people misunderstand a question, most of the time due to a lack of
vocabulary. Criteria to assess your questions on wording:
- Is the question stated in terms of shared vocabulary?
- Does the question contain vocabulary with a single meaning?
- Does the question contain unsupported or misleading assumptions?
- Does the question contain biased wording?
- Does the question contain double negations?
- Is the question personalized correctly?
- Are adequate alternatives presented within the question?

13.5 Response strategy

Unstructured response: Open-ended response


Structured response: Closed response (yes, no)

Several situational factors affect the decision of whether to use open ended or closed
questions. The decision is also affected by the degree to which the following factors are
known to the interviewer:

- Objectives of the study


- Participant's level of information on the topic
- Degree to which the participant has thought through the topic
- Ease with which the participant communicates
- Participant's motivation level to share information

See Exhibit 13.7, p. 168, for response strategies

13.6 sources of existing questions


You can use pretested questions from other research or borrow items from existing sources.
This does not happen without risk  language, phrasing and context might differ for your
own research. Therefore, pretesting is recommended.

13.7 Drafting and refining the instrument  Phase 3


Multistep process:
1. Develop the participant-screening process, along with introduction
2. Arrange the measurement question sequence
3. Instructions
4. Conclusion

Participation screening  screening question, to determine whether the potential participant
has enough of the knowledge necessary.

Branch question  content of one question, assumes other questions have been answered.

Funnel approach  From general to more specific questions.

Pre-testing options:
Researcher pre-testing:
Participant pre-testing:
Collaborative pre-testing: Inform participants it’s a pre-test
Non-collaborative pre-testing: Do not inform participants it’s a pre-test

Chapter 14 The nature of measurement

Measurement: To discover the extent, dimensions, quantity or capacity of something


especially by comparison with a standard. Measurement in research consists of assigning
numbers to empirical events in compliance with a set of rules.

Measurement is a 3-part process:


1. Selecting observable empirical events
2. Developing a set of mapping rules: A scheme for assigning numbers or symbols to
represent aspects of the event being measured
3. Applying the mapping rules to each observation of that event.

Mapping rule example: assign M if male, F if female
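Expressed as code, such a mapping rule is just a lookup table applied to each observation (a hypothetical sketch; the observations are invented):

```python
# Mapping rule: assign a symbol to each observed empirical event
mapping_rule = {"male": "M", "female": "F"}

# Applying the rule to each observation of the event
observations = ["male", "female", "female"]
coded = [mapping_rule[obs] for obs in observations]

print(coded)  # ['M', 'F', 'F']
```

This mirrors the three-part process above: the observable event (reported sex), the mapping scheme (the dictionary), and its application to every observation.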

The goal of measurement: to provide the highest-quality, lowest-error data for testing
hypotheses.

Researchers formulate hypotheses, then they measure if the hypotheses are true or false. An
important question at this point is  What does one measure?
- Objects: Ordinary experiences  Tables, people, books, cars, opinions, peer-group
pressures
- Properties: Characteristics of an object
physical properties  Weight, height, posture.
Psychological properties  Attitudes and intelligence

In a literal sense, researchers measure indicants of properties, or indicants of the properties of


objects. The quality of a study depends on what measures are selected or developed, and
how they fit the situation.

14.2 Data types

Mapping rules have 4 characteristics:


- Classification  Numbers are used to group or sort responses, no order exists
- Order  Numbers are ordered and transitivity (continuous, causal, related) so: A is
greater, less than, equal to.
- Distance  Differences between numbers are ordered.
- Origin  The number series has a unique origin indicated by the number zero

These 4 characteristics result in 4 widely used measurement scales:


- Nominal  categorization; data usually can be grouped into two or more categories.
The researcher uses the mode as the measure of central tendency.
- Ordinal  characteristics of the nominal scale plus an indicator of order, for example
opinions (disagree-neutral-agree). The measure of central tendency is the median.
- Interval  nominal and ordinal combined, plus the concept of equality of intervals
(the distance between 1 and 2 equals the distance between 2 and 3). Arithmetic mean for
tendency measurement (e.g. temperature).
- Ratio  all the powers of the previous scales, plus the provision of an absolute zero;
a company's profit, for example.
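The link between scale type and measure of central tendency can be shown directly; the data values below are invented for illustration:

```python
import statistics

# Nominal: unordered categories -> mode is the only sensible centre
nominal = ["red", "blue", "red", "green"]

# Ordinal: ranked responses (1 = disagree ... 5 = agree) -> median
ordinal = [1, 2, 2, 3, 5]

# Interval: equal distances between values (e.g. temperature) -> mean
interval = [20.5, 21.0, 19.5, 22.0]

print(statistics.mode(nominal))    # red
print(statistics.median(ordinal))  # 2
print(statistics.mean(interval))   # 20.75
```

Averaging the nominal colours or the ordinal ranks would be meaningless arithmetic, which is why each scale level licenses only its own summary statistic.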

14.3 Sources of measurement differences

Systematic error  results from bias

Random error  occurs inconsistently

Error sources:
- Participant  comes from a variety of factors such as testing, history, knowledge
etc.
- Situational factors  situations that place a strain on the interview or
measurement, e.g. another person influencing the participant.
- Measurer  the researcher/measurer can distort responses by checking the wrong
output, careless mechanical handling, incorrect coding, etc.
- Data collection instrument: A defective instrument can cause distortion in two ways:
It can be too confusing and ambiguous to use, and the second is that it might
measure specific data in another way than you would like.

14.4 Characteristics of sound measurement

Validity The extent to which a test measures what we wish to measure


Reliability The accuracy and precision of a measurement procedure.
Practicality Economy, convenience, interpretability

1. Validity

ExternalThe ability to generalize across persons, settings and times.


Internal The ability and extent to which a research instrument measures what it is meant
to measure.

A. Content validity: The extent to which the instrument covers the investigative questions
guiding the study. If the instrument contains a representative sample of the universe of
subject matter of interest, then content validity is good. To evaluate content validity of an
instrument, one must first agree on what elements constitute adequate coverage.
If the data collection instrument adequately covers the topics that have been defined as the
relevant dimensions, we conclude that the instrument has good content validity.
Determination of content validity is judgemental
1. Designer may determine it through careful definition of the topic concerned.
2. Use a panel of people to judge how well the instrument meets the standards.

B. Criterion related validity

Reflects the success of measures used for prediction or estimation. You may want to predict
or estimate the existence of a behaviour. 4 qualities that judge criterion validity:

- Relevance
- Freedom from bias
- Reliability
- Availability

A criterion is relevant: In making decisions you must rely on your own judgement in deciding
what partial criteria are appropriate to measure what you want to measure.

A criterion is free from bias when it gives each person/element an equal opportunity to
score well.

A reliable criterion is stable or reproducible. When a criterion is unreliable but the only
option available, it is possible to use the 'correction for attenuation' formula, which lets you
see what the correlation between the test and the criterion would be if both were made perfectly
reliable.

Finally, the information specified by the criterion needs to be available; if it is not, how will
you access it and how much will that cost?

The usual approach in testing criterion validity is by correlating them.

C. Construct validity
If you want to measure abstract characteristics for which no empirical validation seems
possible. To evaluate construct validity, we consider both theory and the measurement
instrument being used. Once the theory is accepted, we would investigate the adequacy of
the instrument.

 Attempts to identify the underlying construct being measured and how well the test
represents it (them)

2. Reliability
Reliability is concerned with estimates of the degree to which a measurement is free of
random or unstable error. Reliable instruments are robust; they work well at different times
under different conditions. This distinction of time and condition is the basis for frequently
used perspectives on reliability: stability, equivalence and internal consistency.

Stability in reliability (test-retest in multiple measurements)


A measure is reliable when you can secure consistent result with repeated measurements of
the same person with the same instrument. Some errors that can occur in the test retest
methodology and cause a downward bias in stability include:
- Time delays between measurements  situational factor
- Insufficient time between measurements  permits the participant to remember
previous answers and repeat them
- Participant opinions related to the purpose but not assessed with the current
questions
- Topic sensitivity  the participant learns more about the topic
- Introduction of extraneous moderating variables between measurements 
extraneous factors resulting in a change in opinion by the participant.

Equivalence in stability/reliability (multiple measurers)


How much error may be introduced by different investigators or different samples.
Equivalence is concerned with variations at one point in time among observers and samples
of items. A good way to measure equivalence is to compare different observers’ scores of
the same event.
The major interest with equivalence is typically not how participants differ from item to item
but how well a given set of items will categorize individuals. There may be many differences
in response between two samples of items, but if a person is classified the same way by each
test, then the tests have good equivalence.

Internal consistency in stability/reliability (multiple items)


Homogeneity among items. The split-half technique can be used when the measuring tool has many
similar questions or statements to which the subject can respond. The instrument is administered
and the results are separated by item into even and odd numbers, or into randomly selected halves.
The two halves are then correlated; if the correlation is high, the instrument is said to have high
reliability in an internal-consistency sense. The high correlation tells us that there is
similarity among the items.

Improving reliability
- Minimize external sources of variation
- Standardize conditions under which measurements occurs
- Improve investigator consistency by using well-trained, motivated, supervised persons
to conduct research
- Broaden the sample of measurement questions used by adding similar questions to
the data-collection or adding more observers
- Improve internal consistency of an instrument by excluding data from analysis drawn
from measurement questions eliciting extreme responses.

3. Practicality

a. Economy
Economic factors such as costs  depend on the instruments used, the measurement questions and
the data-collection method (personal interviews are more expensive than online surveys).

b. Convenience
A measuring device passes the convenience test if it is easy to use and to apply.

c. Interpretability
Relevant when people other than the researchers need to interpret the data. This includes guides
on how to read and interpret the data, evidence about reliability, correlations and scores, and
detailed instructions.

14.5 The nature of measurement scales

Standardized scales  when what you measure is concrete

Custom-designed scales  when what you measure is abstract and complex (a construct)

Scaling  the procedure in which we give numbers/symbols to a property of objects, in
order to give some of the characteristics of numbers to the properties in question.

Scale selection:
Selection or construction of measurement scales requires decisions in six key areas:
- Study objective: To measure characteristics of participants or to use participants as
judges on objects or indicants presented to them.
- Response form: Rating when participants score an object or indicant, ranking when
you compare scores among two or more indicants or objects, categorization where
participants put themselves or property indicants in groups or categories
- Degree of preference: Preference (choose an object he or she favours) and non-
preference (asked to judge, without reflecting preferences)
- Data properties: How data is classified statistically (nominal, ordinal, interval, ratio)
- Number of dimensions: Unidimensional, measure only one attribute of
participant/object and multidimensional, measure more attributes of
participant/object.
- Scale construction:
Five construction approaches are used in research practice:
Arbitrary: a scale is custom-designed to measure a property or indicant.
Consensus: judges evaluate the items to be included, based on
topical relevance and lack of ambiguity.
Item analysis: measurement scales are tested with a sample of participants.
Cumulative: scales are chosen for their conformity to a ranking of items with ascending
or descending discriminating power.
Factoring: scales are constructed from inter-correlations of items from other studies.

14.6 Response methods

Rating scales: to judge properties of objects without reference to other similar objects. This may be
in forms such as ‘like-dislike’; approve-indifferent-disapprove etc.

Number of scale points: Some researchers think that the greater the number of scale points (five
instead of three), the greater the sensitivity of the instrument.

Alternative scales (pages 416-417):


- Simple category scale (dichotomous scale) offers two mutually exclusive response choices,
yes and no
- Multiple-choice-single-response-scale: Nominal data
- Multiple-choice-multiple-response-scale: Nominal data, multiple responses possible.
- Likert scale: Interval, for example 1-5 likeliness, is given numerical values.
- Semantic differential scale: Interval, measures the psychological meanings of an attitude
object.
- Numerical scale: ordinal or interval, equal intervals
- Multiple rating list: produces a mental map, is quite similar to numerical scale
- Fixed-sum scale: categories whose scores must sum to 100% in total
- Stapel-scale: ratings from +5 to -5
- Graphic rating scale: To discern fine differences, with images etc..
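For the fixed-sum scale in the list above, the defining constraint is that the allocated scores add up to exactly 100. A minimal validation sketch (the function name and example categories are invented for illustration):

```python
# Sketch: a fixed-sum scale asks respondents to allocate exactly 100
# points (or per cent) across categories, so a response can be validated
# by checking its allocation total.
def valid_fixed_sum(allocation, total=100):
    """True if the allocated points sum to the required total."""
    return sum(allocation.values()) == total

# Hypothetical respondent allocating importance across three attributes
answer = {"price": 40, "quality": 35, "service": 25}
print(valid_fixed_sum(answer))  # True
```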

Errors to avoid with rating scales:


- Leniency  when the rater is an easy rater (positive leniency) or a hard rater
(negative leniency); for example, people will give a higher score to someone they know.
- Central tendency: Raters are reluctant to give extreme judgements, for
example when the rater does not know the object or property being rated. Steps
to counteract this:
o Adjust the strength of descriptive adjectives
o Space the intermediate descriptive phrases further apart
o Provide smaller differences in meaning between the steps near the ends of the
scale than between the steps near the centre.
o Use more points in the scale.
- Halo effect  the systematic bias that the rater introduces by carrying over a
generalized impression of the subject from one rating to another. Example: you may
expect the student who does well on the first question of an exam to do well on the
second.
o Halo is a pervasive error

Ranking scales  The subject directly compares two or more objects and makes choices
among them

- Paired comparison scale: choosing between two objects; with more than two objects the
number of required comparisons makes this difficult for the participant.
- Forced ranking scale: List attributes that are ranked relative to each other (1,2,3 etc.)
- Comparative scale: a scale compared with a standard (if known); e.g. this bottle of water
tastes better than the one before (yes, about the same, no)
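The burden of the paired comparison scale grows quickly because a participant must judge every pair, n*(n-1)/2 comparisons for n objects. A small illustration:

```python
# Illustration: the number of pairs a participant must judge in a paired
# comparison grows quadratically, n*(n-1)/2, which is why the method
# becomes burdensome beyond a handful of objects.
from itertools import combinations

for n in (3, 5, 10):
    n_pairs = len(list(combinations(range(n), 2)))
    print(f"{n} objects -> {n_pairs} paired judgements")  # 3, 10, 45
```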

14.7 Measurement scale construction


Five construction approaches are used in research practice:
Arbitrary: a scale is custom-designed to measure a property or indicant.
Consensus: judges evaluate the items to be included, based on
topical relevance and lack of ambiguity.
Item analysis: measurement scales are tested with a sample of participants.
Cumulative: scales are chosen for their conformity to a ranking of items with ascending
or descending discriminating power.
Factoring: scales are constructed from inter-correlations of items from other studies.

Equal-appearing interval scale, also known as the Thurstone scale: this approach resulted in an
interval rating scale for attitude measurement. Its cost, time and staff requirements make it
impractical.

Item analysis scaling is a procedure for evaluating an item based on how well it discriminates
between those persons whose total score is high and those whose total score is low. The most
popular scale using this approach is the summated or Likert scale.

The first step is to collect a large number of statements that meet two criteria:
Each statement is believed to be relevant to the attitude being studied.
Each is believed to reflect a favourable or unfavourable position on that attitude.

The next step is to array the total scores and select some portion representing the highest and
lowest totals: the top 25 per cent and the bottom 25 per cent. These extremes are the two
criterion groups by which we evaluate individual statements. The item means of the high-score
group and the low-score group are then tested for significance by calculating t values. Finally,
the 20 to 25 items with the greatest t values (the most significant differences between means)
are selected for inclusion in the final scale.
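The t value for one item can be sketched as a two-sample t statistic comparing the high-total-score and low-total-score groups. The data below are invented, and the equal-variance form is an assumption of this sketch:

```python
# Sketch (invented data): item analysis for a Likert-type scale. For each
# item, the top- and bottom-quartile groups (by total score) are compared
# with an independent-samples t test; items with the largest t values
# discriminate best and would be kept for the final scale.
from math import sqrt
from statistics import mean, stdev

def t_value(high, low):
    """Two-sample t statistic, assuming equal variances."""
    n1, n2 = len(high), len(low)
    pooled_var = ((n1 - 1) * stdev(high) ** 2 +
                  (n2 - 1) * stdev(low) ** 2) / (n1 + n2 - 2)
    return (mean(high) - mean(low)) / sqrt(pooled_var * (1 / n1 + 1 / n2))

# One item's scores in the high-score group and the low-score group
high_group = [5, 4, 5, 4, 5]
low_group = [2, 1, 2, 3, 1]

print(f"t = {t_value(high_group, low_group):.2f}")
```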
A widely used indicator of how well different items form one scale is Cronbach's alpha. Formally,
Cronbach's α is the average correlation between all items, corrected for the number of items. It can
take values up to +1, and a general rule of thumb is that α ≥ 0.7 indicates a good scale.
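Cronbach's alpha is also commonly computed in its variance form, α = k/(k-1) · (1 - Σ item variances / variance of total scores). A sketch with invented responses:

```python
# Sketch (invented data): Cronbach's alpha in its common variance form,
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)).
from statistics import pvariance

# Rows = participants, columns = four scale items (1-5 responses)
items = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [3, 3, 3, 4],
    [5, 4, 5, 5],
    [1, 2, 1, 2],
]

k = len(items[0])
item_vars = [pvariance(col) for col in zip(*items)]      # per-item variance
total_var = pvariance([sum(row) for row in items])       # variance of totals
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")
```

By the α ≥ 0.7 rule of thumb, these invented items would form a good scale.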

Total scores on cumulative scales have the same meaning. Given a person’s total score, it is
possible to estimate which items were answered positively and which negatively. Scalogram analysis
is a procedure for determining whether a set of items forms a unidimensional scale. A scale is
unidimensional if the responses fall into a pattern in which endorsement of the item reflecting the
extreme position also results in endorsing all items that are less extreme. The scalogram and similar
procedures for discovering underlying structure are useful for assessing behaviours that are highly
structured.
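The unidimensional pattern that scalogram analysis looks for can be checked mechanically: with items ordered from least to most extreme, a perfectly cumulative response row is a run of endorsements (1s) followed by a run of non-endorsements (0s). A minimal sketch with invented responses:

```python
# Sketch (invented data): a minimal Guttman-pattern check. On a perfectly
# cumulative (unidimensional) scale, anyone endorsing a more extreme item
# also endorses every less extreme one, so each response row, ordered from
# least to most extreme item, is 1s followed by 0s.
def is_guttman_pattern(row):
    """True if the row is a run of 1s followed by a run of 0s."""
    return sorted(row, reverse=True) == list(row)

responses = [
    [1, 1, 1, 0],  # endorses the three least extreme items
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
]

print(all(is_guttman_pattern(r) for r in responses))  # True
```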
Factor scales include a variety of techniques that have been developed to address
two problems:
How to deal with a universe of content that is multidimensional.
How to uncover underlying dimensions that haven’t been identified by exploratory research.
These techniques are designed to inter-correlate items so that their degree of interdependence
may be detected. We limit the discussion in this section to the semantic differential (SD),
which is based on factor analysis.

Three factors contributed most to meaningful judgements by participants:


Evaluation
Potency
Activity

SD scale should be adapted to each research problem. SD construction involves the following steps:
Select the concepts.
Select the original bipolar word pairs or pairs you adapt to your needs.
You need at least three bipolar pairs for each factor to use evaluation.
The scale must be relevant to the concepts being judged.
Scales should be stable across subjects and concepts.
Scales should be linear between polar opposites and pass through the origin.

Consensus and cumulative scaling are time-consuming and therefore less used.

Advanced scaling techniques


Multidimensional scaling (MDS) describes a collection of techniques that deal with property space
in a more general manner than the semantic differential. With MDS, one can scale objects, people
or both, in ways that provide a visual impression of the relationships among variables. The data-
handling characteristics of MDS provide several options: ordinal input (with interval output), and
fully metric (interval) and non-metric modes. The various techniques use proximities as input data.
A proximity is an index of perceived similarity or dissimilarity between objects.
See exhibit 14.22, p. 434

Conjoint analysis is used to measure complex decision making that requires multi-attribute
judgements. Its primary focus has been the explanation of consumer behaviour, with numerous
applications in product development and marketing. Conjoint analysis can produce a scaled value
for each attribute as well as utility value for attributes that have levels. Both ranking and rating
inputs may be used to evaluate product attributes.
