There is no point in planning if we have no clue what the future will bring.
Illustrated leaflet
RESEARCH ?
INSTRUMENTS ?
NETWORK of EXCELLENCE
INTEGRATED PROJECT
COORDINATION ACTION
CONCLUSIONS
(1):
founded on S&T excellence, integrating existing knowledge with the creation of new knowledge
CONCLUSIONS
(2):
ACHIEVING BREAKTHROUGH
THROUGH INTEGRATION !
INSTRUMENTS ?
NoE (Network of Excellence)
Purpose: the art of getting together
Target audience: R&D institutes, universities; industry/SMEs mainly indirectly
EU-funded activities: integrating, joint research, spreading excellence, consortium management
Indicative EU funding: 7 million
Average duration: 48-60 months
Optimum consortium: 6-12 partners
Specific characteristics: institutional COMMITMENT at strategic level from the very start and for the whole duration; limited number of partners

IP (Integrated Project)
Purpose: the art of breakthrough
Target audience: industry/SMEs, R&D institutes, universities
Indicative EU funding: 5-15 million
Average duration: 36-60 months
Optimum consortium: 10-20 partners
Specific characteristics: RADICAL INNOVATION through INTEGRATION of disciplines, technologies, activities; PROGRAM approach on multiple issues

STREP (Specific Targeted Research Project)
Target audience: R&D institutes, universities, industry/SMEs
Average duration: 18-36 months
Optimum consortium: 6-15 partners
Specific characteristics: PROJECT approach on a single issue; FRONTIERS of KNOWLEDGE

CA (Coordination Action)
EU-funded activities: coordination, networking; meetings, seminars, workshops, info systems, consortium management, foresight, scenario building, technology roadmaps
Average duration: 18-36 months
Optimum consortium: 13-26 partners

SSA (Specific Support Action)
Average duration: 9-30 months
Optimum consortium: 1-15 partners
Specific characteristics: NO funding of research
Traditional Instruments:
Coordination actions Specific support actions
New Instruments:
Networks of excellence Integrated projects
Research Methodology
A science of studying how research is done scientifically
A way to systematically solve the research problem by logically adopting various steps
Methodology helps to understand not only the products of scientific inquiry but the process itself
Aims to describe and analyze methods, throw light on their limitations and resources, clarify their presuppositions and consequences, relating their potentialities to the twilight zone at the frontiers of knowledge
Phase 1: essential first steps
Phase 2: data collection
Phase 3: analysis and interpretation
Phase 1
Clarify the issue to be researched and select research method(s). Essential because a question that is unclear or too broad cannot be answered. The research method allows the research to be conducted according to a plan or design.
Phase 1 cont.
Clarifying the question and method enables the researcher to be clearer about the data that is needed, and therefore to make a decision about what sample size, or amount of data, is needed.
Phase 2
Phase 3
Analysis and interpretation
Relating the data to the research question
Drawing conclusions
Assessing the limitations of the study
Theory
Hypothesis
Operational Definition
Principle of Falsifiability
Subjects
Selection Factor
Replication
Principle of Falsifiability
The principle that a scientific theory must make predictions that are specific enough to expose the theory to the possibility of disconfirmation; that is, the theory must predict not only what will happen, but also what will not happen.
Methods of Observation
The Case-Study Method
The Survey Method
The Testing Method
The Naturalistic-Observation Method
Direct Surveys
Interviewer maintains or can maintain a direct communication with the respondent and is able to provide feedback, repeat a question, or ask for additional information
Indirect Surveys
Researcher's personal impact is very small because there is no direct communication between the respondent and the interviewer. The questions are typically written and handed in, mailed, or sent electronically to the respondents' homes, classrooms, or workplaces.
Focus Groups
A survey method used in academic & marketing research. The most common usage is one where a group responds to specific social, political, or marketing messages. The typical focus group contains 7-10 members who are experts, potential buyers, viewers, or other customers.
Psychologists use psychological tests, such as tests of intelligence, aptitude, and personality, to measure various traits and characteristics among a population
Testing
Standardize: to develop uniform procedures for giving and scoring a test.
Norms: established standards of performance.
Reliability: consistency of scores derived from a test.
Validity: the ability of a test to measure what it was designed to measure.
Observations
Naturalistic Observation: a scientific method in which organisms are observed in their natural environments
Laboratory Observation: a method where a place is found in which theories, techniques, and methods are tested and demonstrated
A scientific method that studies the relationships between variables. The correlation coefficient is a number between -1.00 and +1.00 that expresses the strength and direction of the relationship between two variables.
Types of Correlations
Positive correlation: increases in one variable are associated with increases in the other; decreases are likewise associated.
Negative correlation: increases in one variable are associated with decreases in the other.
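As a minimal illustration of the correlation coefficient described above, the Pearson formula can be computed in plain Python; the paired data values below are invented for the example:

```python
import math

def correlation(xs, ys):
    """Pearson correlation coefficient: a number between -1.00 and +1.00
    expressing the strength and direction of a linear relationship."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical data: hours studied vs. exam score
hours = [1, 2, 3, 4, 5]
scores = [52, 60, 63, 71, 80]
print(round(correlation(hours, scores), 3))  # close to +1: strong positive correlation
```

A value near +1 or -1 indicates a strong relationship; a value near 0 indicates little or no linear relationship.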
Experimental Variables
Experimental and Control Conditions
Experimenter Effects
Advantages and Limitations of Experiments
Experimental Variables
Independent Variable: a variable that an experimenter manipulates. Dependent Variable: a variable that an experimenter predicts will be affected by manipulations of the independent variable.
Experimental Method
Treatment refers to a condition received by participants so that its effects may be observed
Experimental subjects receive the treatment. Control subjects do not receive the experimental treatment, but all other conditions are comparable to those of the experimental subjects.
Placebo refers to a bogus treatment that has the appearance of being genuine Blind refers to unawareness as to whether or not one has received a treatment
Double-blind refers to a study where neither the subjects nor the persons measuring results know who has received the treatment
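The random assignment underlying experimental and control conditions can be sketched in a few lines of Python; the participant IDs and group sizes here are hypothetical:

```python
import random

random.seed(7)  # reproducible assignment for the example

# Hypothetical pool of 20 participants
participants = [f"P{i:02d}" for i in range(1, 21)]
random.shuffle(participants)

# Random assignment: half receive the treatment, half the placebo (control)
treatment = set(participants[:10])
assignment = {p: ("treatment" if p in treatment else "placebo")
              for p in participants}

# In a double-blind study, neither subjects nor raters see this mapping
# until the data are collected; raters work from coded IDs only.
print(sum(1 for v in assignment.values() if v == "treatment"))  # 10
```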
Experiments allow conclusions about cause-effect relationships. Participants in experiments are not always representative of larger population.
Measurement
The behavioral definition and measurement techniques of the researcher's home culture may not transfer easily to another culture, thereby leading to an imposed etic (Berry, 1969)
Measures of Central Tendency
The measure that indicates the location of a score distribution on a variable, that is, describes where most of the distribution is located.
Median = the score at the 50th percentile of a distribution
Mode = the most frequently occurring score in the distribution
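Python's standard statistics module computes these measures of central tendency directly; the score distribution below is invented:

```python
import statistics

scores = [70, 85, 85, 90, 100]

print(statistics.mean(scores))    # arithmetic mean: 86
print(statistics.median(scores))  # score at the 50th percentile: 85
print(statistics.mode(scores))    # most frequently occurring score: 85
```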
Survey Research
Descriptive Research
Descriptive research is also known as nonexperimental research
Asks the basic question: What is?
No manipulation of variables
Measure and record events that would happen anyway
No randomization, therefore less control and many threats to internal validity Cause-and-effect is more difficult to establish
Survey
Survey: technique of descriptive research that seeks to determine present practices or opinions of a specified population. Types of survey research include the questionnaire, interview, and normative survey.
Durell, D. L., Pujol, T. J., & Barnes, J. T. (2003). A survey of the scientific data and training methods utilized by collegiate strength and conditioning coaches. J Strength Cond Res, 17(2), 368-373.
Schick, M. G., Brown, L. E., Coburn, J. W., Beam, W. C., Schick, E. E., & Dabbs, N. C. (2010). Physiological profile of mixed martial artists. Medicina Sportiva, 14(4), 182-187.
Questionnaire
Questionnaire: type of paper-and-pencil survey used in descriptive research in which information is obtained by asking participants to respond to questions rather than by observing their behavior. Limitation: results are simply what people say they do, believe, like, dislike, etc.
Questionnaire, cont'd
Delimit the sample What is the specific population you wish to examine?
Adults vs. children Exercisers vs. nonexercisers (how do you define exercisers?) Elite coaches
Questionnaire, cont'd
Sampling error
Many samples may be drawn from a population
Each sample will yield different results
The difference between samples is the sampling error (amount of error to expect in a single sample)
Must be adequate to represent population of interest
Must be practical from the standpoint of time and cost
Sample size
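One common way to balance an adequate sample against time and cost is the standard sample-size formula for estimating a proportion, n = z^2 * p * (1 - p) / e^2. This sketch uses the conventional 95% confidence z-value and a conservative 0.5 proportion, neither of which comes from the text:

```python
import math

def sample_size(margin_of_error, z=1.96, p=0.5):
    """Sample size needed to estimate a proportion within a given margin
    of error at ~95% confidence (z = 1.96); p = 0.5 is the most
    conservative assumption about the population proportion."""
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

print(sample_size(0.05))  # 385 respondents for a 5% margin of error
```

With a 5% margin of error this gives 385 respondents; tightening to 3% raises the requirement to 1,068, which is where the time-and-cost trade-off bites.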
Questionnaire, cont'd
Open-ended questions
Category of question in questionnaires and interviews that allows the respondent considerable latitude to express feelings and to expand on ideas Example: How do you think things went today? Drawbacks:
Respondents don't like them
They are time-consuming to answer
Limited control over the types of answers given
May be more difficult to analyze
Questionnaire, cont'd
Closed questions
Category of question found in questionnaires or interviews that requires a specific response and that often takes the form of rankings, scaled items, or categorical responses. Ranking: type of closed question that forces the respondent to place responses in a rank order according to some criterion.
Questionnaire, cont'd
Example of Ranking: From what sources has most of your nutrition information come? Rank top 3
Scaled items: type of closed question that requires participants to indicate the strength of their agreement or disagreement with some statement or the relative frequency of some behavior. Example of a scaled item: In a required physical education program, students should be required to take at least one dance class.
1. 2. 3. 4. 5.
Questionnaire, cont'd
Likert-type scale: consists of 3 to 9 items. Equal intervals between responses are assumed, i.e., the difference between "strongly agree" and "agree" is considered equivalent.
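A minimal sketch of how Likert-type responses are usually coded and averaged, relying on the equal-interval assumption above; the responses are invented:

```python
# Conventional 5-point coding; the equal-interval assumption is what
# justifies averaging these ordinal categories as numbers
CODES = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
         "agree": 4, "strongly agree": 5}

responses = ["agree", "strongly agree", "neutral", "agree", "disagree"]
scores = [CODES[r] for r in responses]

mean_score = sum(scores) / len(scores)
print(mean_score)  # (4 + 5 + 3 + 4 + 2) / 5 = 3.6, leaning toward "agree"
```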
Questionnaire, cont'd
Categorical response: type of closed question that offers the participant only two responses, such as yes or no. Possible responses include yes/no, true/false, female/male, etc.
Questionnaire, cont'd
http://www.surveymonkey.com/?cmpid=us:ps:google&gclid=CPbu_7WmqACFQVaagodQSSdTw
Questionnaire, cont'd
Follow-up
Wait at least 10 days for follow-up
Wait another 10 days, then send another questionnaire
Keep in mind that respondents are self-selected and this biases your results
Questionnaire, cont'd
Delphi survey method: survey technique that uses a series of questionnaires in such a way that the respondents (usually experts) reach a consensus about the subject
Survey is sent to respondents (experts) Results are sent to respondents and they are asked to reconsider their answers
Personal Interview
Essentially the same as the questionnaire except questioning is done orally instead of in writing Higher response rate but smaller samples than questionnaire
Normative Survey
Normative survey: survey method that involves establishing norms for abilities, performances, beliefs, and attitudes. Similar to the questionnaire except that tests are administered, e.g., AAHPERD Youth Fitness Test (1958), National Children and Youth
Pure Research ("because it's there"): contributes to abstract, theoretical understanding
Applied Research: "I have a hammer, so go and find me a nail"
Instrumentalist Research
Quantitative
Ratio: a natural zero
Cardinal / Interval: no natural zero
Ranked / Ordinal: sequence (numbers)
Qualitative
Category / Ordinal: sequence (text)
Nominal: differentiation
Dichotomous: it is or it isn't
Likert Scales
A contrived ranked ordinal scale much used in attitudinal research
Usually 5, 7 or 9 choices
Usually anchored by end-points such as "Strongly Disagree ... Strongly Agree" and with a neutral-sounding midpoint
Usually very long lists of questions
The data is generally processed as though it were cardinal (interval) data
Passive (Observation) or Active (Response Elicitation)
Purpose: Disguised or Openly Declared
Structured, Semi-Structured or Unstructured
In Person, By Telephone, By Written Form (e.g. Mail), By Email/Web-Form, By Mechanical Means
Unit of Analysis
A Person
An Event
An Object
A Body of Individuals: Group, Organisational Unit, Organisation
A Relationship, e.g. a Dyad
An Aggregate: Census District, Industry Segment or
One or more Variables (the Cause) are asserted to determine another Variable (the Effect)
Determinant Cause: that factor which is the necessary and sufficient condition for some subsequent effect
Probabilistic Causes: those factors that are necessary but individually not sufficient conditions for some subsequent effect
Evidence of Causality
Co-variation / Correlation Time Order, and Chaining Absence of other variables that might be the real cause Plausibility / Systemic Relationship
Test Case
I am a manufacturer of raincoats. I want to increase sales. I increase my advertising budget by 100%. Sales go up 20%. What is the relationship between the increase in the advertising budget and the increase in sales?
Introduction
1. Descriptive Research
When a study is designed primarily to describe what is going on or what exists:
Describe characteristics of relevant groups (consumers, salespeople, etc.)
Public opinion polls that seek only to describe the proportion of people who hold various opinions are primarily descriptive in nature
Determine the perception of product characteristics
Make specific predictions (for example, retail sales)
2. Relational Research
When a study is designed to look at the relationships between two or more variables. A public opinion poll that compares what proportion of males and females say they would vote for a Democratic or a Republican candidate in the next presidential election is essentially studying the relationship between gender and voting preference. Study of the relation of shopping to eating out.
3. Causal Research
When a study is designed to determine whether one or more variables (e.g., a program or treatment variable) causes or affects one or more outcome variables (cause-effect relationships). Examples: study the effect of an advertising campaign on product sales. Study the effect of presence and helpfulness of salespeople on sales of housewares. Causal studies are probably the most demanding of all (the main method for causal research is experimentation).
Variables
A variable is any entity that can take on different values. An attribute is a specific value on a variable.
Gender has two attributes: male and female.
The variable agreement might be defined as having five attributes:
1 = strongly disagree
2 = disagree
3 = neutral
4 = agree
5 = strongly agree
The attributes of a variable should be exhaustive and mutually exclusive.
Types of variables
Qualitative: described in words (color, gender, etc.)
Quantitative: described in numbers (age, salary, etc.)
Quantitative variables are divided into two types:
Discrete: whole numbers, countable (number of employees)
Continuous: can take fractional values; can take any value within a certain interval
Define secondary and primary data
Describe primary data collection methods
Describe sampling techniques
Identify advantages and disadvantages of different data gathering techniques
Construct a survey
Secondary Data
Data gathered by another source (e.g. research study, survey, interview) Secondary data is gathered BEFORE primary data. WHY? Because you want to find out what is already known about a subject before you dive into your own investigation. WHY? Because some of your questions can possibly have been already answered by other investigators or authors. Why reinvent the wheel?
Primary Data
Data never gathered before Advantage: find data you need to suit your purpose Disadvantage: usually more costly and time consuming than collecting secondary data Collected after secondary data is collected
Sampling Techniques
Population - total group of respondents that the researcher wants to study. Populations are too costly and time consuming to study in entirety. Sample - selecting and surveying respondents (research participants) from the population.
Sampling Techniques
A probability sample is one that gives every member of the population a known chance of being selected.
Simple random sample - anyone
Stratified sample - different groups (ages)
Cluster sample - different areas (cities)
Sampling Techniques
A non-probability sample is an arbitrary grouping that limits the use of some statistical tests. It is not selected randomly.
Convenience sample - readily available
Quota sample - maintain representation
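The probability samples listed above can be sketched with Python's random module; the population of respondent IDs and age groups is made up for illustration:

```python
import random

random.seed(0)  # reproducible draws for the example

# Hypothetical population: 100 respondents, each tagged with an age group
population = [{"id": i, "group": ("18-34", "35-54", "55+")[i % 3]}
              for i in range(100)]

# Simple random sample: every member has an equal, known chance of selection
simple = random.sample(population, 10)

# Stratified sample: draw separately from each group (stratum)
stratified = []
for group in ("18-34", "35-54", "55+"):
    stratum = [p for p in population if p["group"] == group]
    stratified.extend(random.sample(stratum, 3))

print(len(simple), len(stratified))  # 10 9
```

A cluster sample would instead pick whole groups (e.g. one entire stratum) rather than individuals from each.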
Focus Groups: bring together respondents with common characteristics
Observation: actually view respondents
Experiment: controlled variables and respondent groups
Non-personal survey: on site, telephone, mail, fax, computer, panel
Personal interview: one-on-one survey with respondents
Company records: internal document survey research
Either/or responses (true/false; yes/no; for/against)
Multiple choice (select one or more than one)
Scaled response (gather a range of values: strongly disagree, somewhat disagree, neutral, somewhat agree, strongly agree)
Plan a user-friendly format
Gather demographic data (age, gender, etc.) when necessary
Guarantee anonymity
Ensure ease of tabulation (e.g. Scantron forms)
Ask well-phrased and unambiguous questions that can be answered
Develop for completeness (get all the data)
Pilot test the instrument
Assignment
Prepare a questionnaire (word processed on paper) that you utilized to gather the primary data for the formal report case. The questionnaire will be inserted as an appendix at the end of your formal report. Remember, you should include on the survey: an introductory statement about the purpose of the survey, a motivational reason why customers should take it, and a reminder to participants that they are taking the survey anonymously. You should NOT include on the survey: answer percentages
The purpose of this survey is to help management identify family issues that our employees experience that are related to job performance. Please respond to the following survey items by checking the appropriate response next to each question/item. Your responses are completely anonymous and will only be used to assess overall employee characteristics.
Chapter 3: Research design and methodology
Introduction: What does Chapter 3 consist of?
Research methodology
Data collection (steps you took, methodology)
Data analysis
Conclusions based on Chapter 3
Research Methodology
Includes the following: 1. Research Design 2. Research Environment 3. Research Materials 4. Research Method/Data Gathering Procedure 5. Statistical Tools
Research Design
Includes the following: 1. identification of the variables and their categories 2. the sampling scheme 3. the treatment grouping
Research Environment
Research Materials
Includes a list of the different materials used in the investigation. Materials used in the study, like glassware, equipment, and chemicals, are mentioned as the details of the procedure are given, but they are NOT enumerated or listed individually as in a laboratory manual.
Presents in chronological order the general procedure of the conduct of the study Provides all the needed details especially if new to allow others to use your methodology Describes in detail what will be done, as well as when, where, and how
Discusses the different methods in sampling, experimenting, data gathering, and statistical testing and analysis. Presents the conditions that would enable the researchers to observe the differences in the dependent variables which are actually the results of the manipulation of the independent variable.
Statistical Tools
Discusses the appropriateness of the statistical tools employed in treating the data gathered for analysis. Describes how each statistical test, whether descriptive or inferential, is used in the study; the level of significance is also established.
Guide Questions for Checking the Different Parts of the Research Proposal
1. Have I identified the specific research problem I wish to investigate?
2. Have I indicated what I intend to do about this problem?
3. Have I put forth an argument as to why this problem is worthy of investigation?
4. Have I asked the specific
6. Do I intend to investigate a relationship? If so, have I indicated the variables I think may be related?
7. Have I identified all key terms clearly and operationally?
8. Have I surveyed and described relevant studies related to the problem?
9. Have I surveyed the existing
11. Have I described my sampling plan?
12. Have I described the relevant characteristics of my sample in detail?
13. Have I identified the population to which the results of the study may legitimately be generalized?
14. Have I described the instrument/s to be used?
15. Have I indicated their relevance to the present study?
16. Have I stated how I will check the reliability of data obtained from all instruments?
17. Have I stated how I will check the validity of data obtained from all instruments?
18. Have I fully described the procedures to be followed in the study: what will be done, where, when, and how?
19. Have I discussed any feasible alternative explanations that might exist for the results of the study?
20. Have I discussed how I will control for these alternative explanations?
21. Have I described how I will organize the data I will collect?
22. Have I described how I will analyze the data, including statistical procedures, and why these procedures are appropriate?
Outline
research strategy
Descriptive strategy
The goal is to describe the state of affairs at the time of the study
Measures variables as they exist naturally
e.g. 19% of eligible voters participated in the election
Measures two variables, usually as they exist naturally. The goal of this strategy is to describe a relationship between the two variables without attempting to explain the cause of the relationship (e.g. Are students' GPAs related to their parents' income?).
Answers questions about the relationship between two variables by demonstrating a difference between two groups or two treatment conditions (e.g. verbal scores of 6-year-old boys and 6-year-old girls).
Correlational strategy
Nonexperimental strategy
Experimental strategy
The researcher manipulates one variable (called the independent variable) while observing or measuring a second variable (the dependent variable). This is the true experiment because the independent variable is manipulated by the researcher (e.g. room temperature). The goal of the experimental strategy is to determine whether a causal relationship exists between two variables.
Quasi-experimental strategy
Uses a nonmanipulated variable to define groups or conditions (e.g. time or age) or pre- and post-treatment. Controls other variables as much as possible. The goal is to obtain evidence in support of a cause-and-effect relationship; however, a quasi-experimental strategy cannot unambiguously demonstrate a cause-and-effect relationship.
Research strategy: refers to the general approach and goals of the study
Research design: general plan for implementing a research strategy (e.g. group versus individual, same individuals vs. different individuals, number of variables included)
Research procedure: an exact, step-by-step description of a specific research study
CHAPTER 10
MEASUREMENT IN MARKETING RESEARCH
Basic types of question-response formats
Considerations in choosing a question-response format
Measurement and scale characteristics in a question-response format
Levels of measurement of scales
Various types of scaled-response question formats
Reliability and validity of measurements
Questionnaire
Data Analysis
Findings
Recommendations
Managerial Action
Editing
Refers to going through the questionnaire to make certain the skip patterns are followed and required questions are filled out. A skip pattern is the sequence in which questions are asked. An open-ended question is one that does not contain prerecorded possible responses:
Un-probed format: seeks no additional information from respondents.

Probed format: the researcher may ask for comments or statements from the respondents; the researcher may ask for additional information.
Response format:
Yes/No options.
Scaled-response Questions:
Purely numerical or only endpoints are identified. All of the scaled positions are identified.
Objects:
Properties:
Objective properties:
Subjective properties:
Scale Characteristics
Description:
Agree/Disagree, Approve/Disapprove
Size of the descriptor, e.g. a two-car vs. a one-car family.
Order:
Distance:
Origin:
0 or 1.
Illustration: scoring the finish of a race at each level of measurement
Nominal: numbers assigned to runners (7, 8, 3)
Ordinal: rank order of finish (third place, second place, first place)
Interval: performance ratings (8.2, 9.1, 9.6)
Ratio: time to finish, in seconds (15.2, 14.1, 13.4)
Scale: marketing examples / permissible descriptive statistics / permissible inferential statistics
Nominal: brand nos., store types / percentages, mode / chi-square, binomial test
Ordinal: preference rankings, market position, social class / percentile, median
Interval: attitudes, opinions, index nos. / range, mean, standard deviation / product-moment correlation, t tests, regression
Ratio: length, weight / geometric mean, harmonic mean / coefficient of variation
Likert
Semantic Differential
Stapel
Attitude Scales
Scaling Defined:
The term scaling refers to procedures for attempting to determine quantitative measures of subjective and sometimes abstract concepts. It is defined as a procedure for the assignment of numbers to a property of objects in order to impart some of the characteristics of numbers to the property in question.
Itemized rating scales are very similar to graphic rating scales, except that respondents must select from a limited number of ordered categories rather than placing a check mark on a continuous scale.
Itemized and graphic scales are noncomparative because the respondent evaluates each object independently, one at a time, rather than relative to another object.
Rank-Order Scale:
Q-Sorting:
Q-Sorting is basically a sophisticated form of rank ordering. A set of objects (verbal statements, slogans, product features, potential customer services, and so forth) is given to an individual to sort into piles according to specific rating categories.
Paired Comparison:
Paired comparison scales ask a respondent to pick one of two objects from a set based upon some stated criteria.
The construction of the semantic differential scale begins with the determination of a concept to be rated. The researcher selects dichotomous pairs of words or phrases that could be used to describe the concept. Respondents then rate the concept on a scale. The mean of these responses for each pair of adjectives is computed and plotted as a profile or image.

The Stapel scale is a modification of the semantic differential. A single adjective is placed at the center of the scale. Typically it is designed as a 10-point scale ranging from +5 to -5. The technique is designed to measure both the direction and intensity of attitudes simultaneously.
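The profile computation described above, taking the mean response for each adjective pair, can be sketched as follows; the 7-point ratings are hypothetical:

```python
# Hypothetical 7-point semantic differential ratings from four respondents
# (1 = left-hand adjective, 7 = right-hand adjective)
ratings = {
    "Rugged-Delicate":           [2, 3, 2, 1],
    "Excitable-Calm":            [5, 6, 4, 5],
    "Uncomfortable-Comfortable": [6, 7, 6, 5],
}

# The mean per adjective pair gives one point of the profile to be plotted
profile = {pair: sum(r) / len(r) for pair, r in ratings.items()}
for pair, mean in profile.items():
    print(f"{pair}: {mean:.2f}")
```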
Stapel Scale:
Likert Scale:
The Likert scale consists of a series of statements that express either a favorable or an unfavorable attitude toward the concept under study. Scale designed to measure the likelihood that a potential customer will purchase a product or service.
Direct Questioning
Indirect Questioning
Observation
Instructions: We are going to present you with ten pairs of shampoo brands. For each pair, please indicate which one of the two brands of shampoo you would prefer for personal use.
[Paired-comparison matrix: rows and columns list the brands Jhirmack, Finesse, Vidal Sassoon, Head & Shoulders, and Pert; the final column gives the number of times each brand was preferred. Pert row: 0 0 1 0, preferred 1 time.]
(a) A 1 in a particular box means that the brand in that column was preferred over the brand in the corresponding row; a 0 means that the row brand was preferred over the column brand. (b) The number of times a brand was preferred is obtained by summing the 1s in each column.
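The tally rule in note (b), summing the 1s in each column, can be sketched as follows; since the full preference matrix is not recoverable here, the 0/1 entries are hypothetical:

```python
# Hypothetical paired-comparison results for the five shampoo brands.
# matrix[row][col] = 1 means the column brand was preferred over the row brand.
brands = ["Jhirmack", "Finesse", "Vidal Sassoon", "Head & Shoulders", "Pert"]
matrix = [
    [0, 1, 1, 1, 1],  # row: Jhirmack
    [0, 0, 1, 1, 1],  # row: Finesse
    [0, 0, 0, 1, 0],  # row: Vidal Sassoon
    [0, 0, 0, 0, 1],  # row: Head & Shoulders
    [0, 0, 1, 0, 0],  # row: Pert
]

# Number of times each brand was preferred = sum of the 1s in its column
times_preferred = {brand: sum(row[col] for row in matrix)
                   for col, brand in enumerate(brands)}
print(times_preferred)
```

With five brands there are ten pairs, so the column sums always total 10.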
Rank-order form (blank to the right of each brand for its rank):
1. Crest _________
2. Colgate _________
3. Aim _________
4. Gleem _________
5. Macleans _________
Instructions
On the next slide are eight attributes of bathing soaps. Please allocate 100 points among the attributes so that your allocation reflects the relative importance you attach to each attribute. The more points an attribute receives, the more important the attribute is. If an attribute is not at all important, assign it zero points. If an attribute is twice as important as some other attribute, it should receive twice as many points.
Sample allocations (three respondents, eight attributes; each row sums to 100):
Respondent 1: 8 2 3 53 9 7 5 13 (total 100)
Respondent 2: 2 4 9 17 0 5 3 60 (total 100)
Respondent 3: 4 17 7 9 19 9 20 15 (total 100)
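The constant-sum allocations above can be validated and summarized in a few lines; a sketch using the three rows exactly as given:

```python
# Point allocations from the three respondents (eight soap attributes each)
allocations = [
    [8, 2, 3, 53, 9, 7, 5, 13],
    [2, 4, 9, 17, 0, 5, 3, 60],
    [4, 17, 7, 9, 19, 9, 20, 15],
]

# Each constant-sum allocation must total exactly 100 points
assert all(sum(row) == 100 for row in allocations)

# Average importance per attribute across respondents
averages = [sum(col) / len(allocations) for col in zip(*allocations)]
print(averages)
```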
Comparison of scales (examples / advantages / disadvantages):
Semantic Differential: seven-point scale with bipolar labels; used for brand, product, and company images; advantage: versatile; disadvantage: scoring can be cumbersome unless computerized
Stapel Scale: unipolar ten-point scale, -5 to +5, without a neutral point (zero); used for measurement of attitudes and images; advantage: easy to construct, administer over telephone; disadvantage: confusing and difficult to apply
A Semantic Differential Scale for Measuring Self-Concepts, Person Concepts, and Product Concepts:
1) Rugged :---:---:---:---:---:---:---: Delicate
2) Excitable :---:---:---:---:---:---:---: Calm
3) Uncomfortable :---:---:---:---:---:---:---: Comfortable
4) Dominating :---:---:---:---:---:---:---: Submissive
Balanced vs. unbalanced category labels (example): very good / good / bad / very bad / extremely bad versus good / somewhat good / bad / very bad
A variety of scale configurations may be employed to measure the gentleness of Cheer detergent. Some examples include the following. Cheer detergent is:
1) Very harsh ------------ Very gentle
2) Very harsh 1 2 3 4 5 6 7 Very gentle
3) Very harsh . . . Neither harsh nor gentle . . . Very gentle
4) ____ ____ ____ ____ ____ ____ ____
   -3 Very harsh
   -2 Harsh
   -1 Somewhat harsh
    0 Neither harsh nor gentle
   +1 Somewhat gentle
   +2 Gentle
   +3 Very gentle
Thermometer Scale
Instructions:
Please indicate how much you like McDonald's hamburgers by coloring in the thermometer. Start at the bottom and color up to the temperature level that best indicates how strong your preference is.
Thermometer labels: 100 = Like very much; 75; 50; 25; 0 = Dislike very much

Smiling Face Scale
Instructions: Please point to the face that shows how much you like the Barbie Doll. If you do not like the Barbie Doll at all, you would point to Face 1. If you liked it very much, you would point to Face 5.
Table 9.2: Summary of Itemized Scale Decisions
1) Number of categories: Although there is no single, optimal number, traditional guidelines suggest that there should be between five and nine categories
2) Balanced vs. unbalanced: In general, the scale should be balanced to obtain objective data
3) Odd vs. even number of categories: If a neutral or indifferent scale response is possible from at least some of the respondents, an odd number of categories should be used
4) Forced vs. non-forced: In situations where the respondents are expected to have no opinion, the accuracy of the data may be improved by a non-forced scale
5) Verbal description: An argument can be made for labeling all or many scale categories; the category descriptions should be located as close to the response categories as possible
6) Physical form: A number of options should be tried and the best one selected
Reliability of Measurements
Scale Evaluation: Reliability, Validity, Generalizability
Internal Consistency
1) Other relatively stable characteristics of the individual that influence the test score, such as intelligence, social desirability, and education. 2) Short-term or transient personal factors, such as health, emotions, fatigue.
3) Situational factors, such as the presence of other people, noise, and distractions.
4) Sampling of items included in the scale: addition, deletion, or changes in the scale items.
5) Lack of clarity of the scale, including the instructions or the items themselves.
6) Mechanical factors, such as poor printing, overcrowding items in the questionnaire, and poor design.
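Internal consistency, listed under scale evaluation above, is commonly quantified with Cronbach's alpha; this plain-Python sketch uses invented item scores (the formula is standard, not from the text):

```python
def variance(xs):
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """Cronbach's alpha: items is one list of scores per scale item,
    all over the same respondents, in the same order."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

# Hypothetical 3-item scale answered by 5 respondents (scores 1-5)
item_scores = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
print(round(cronbach_alpha(item_scores), 2))
```

Values near 1 indicate that the items consistently measure the same underlying trait; a common rule of thumb treats 0.7 or above as acceptable.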
Descriptive research describes the present status of people, attitudes, and progress.
Sampling Techniques
Population
A population is defined as all members that are described by the characteristics selected by the experimenter.
All students at SJSU. All women students at SJSU. All Kinesiology majors. All MA sport management students. All students in KIN 250.
Sample
Simple random sample Systematic sample Stratified sample Cluster sample Proportional sample
Systematic Sample
Stratified Sample
A stratified sample assures a random sample; however, the sample has equal numbers within a particular characteristic.
Cluster Sample
A sample is chosen because it is difficult to sample the entire population, e.g., choosing all members of a particular class rather than individuals. A cluster sample is often easier and less costly, but generalizability is limited because of an N of 1.
Proportional Sample
Decide the proportions of the groups you want represented in your sample. The proportions should be logically based in the literature.
Clearly identify the survey purpose
Outline the field of study
Avoid overlapping questions
Order questions in a logical format (simple to complex)
Eliminate ambiguities
Eliminate all grammatical errors
The construction of the measuring devices is perhaps the most important segment of any study. Many well-conceived research studies have never seen the light of day because of flawed measures.
Schoenfeldt, 1984
The point is not that adequate measurement is nice. It is necessary, crucial, etc. Without it we have nothing. Korman, 1974, p. 194
Validation is an unending process. Most psychological measures need to be constantly evaluated and reevaluated to see if they are behaving as they should. Nunnally & Bernstein, 1994, p. 84
173
Validity in Research
174
Construct validity is present when there is a high correspondence between the scores obtained on a measure and the mental definition of a construct it is designed to represent. Internal validity is present when variation in scores on a measure of an independent variable is responsible for variations in scores on a measure of a dependent variable. External validity is present when generalizations of findings obtained in a research study, other than statistical generalization, are made appropriately.
Construct Validation
Involves procedures researchers use to develop measures and to make inferences about a measure's construct validity. It is a continual process; no one method alone will give confidence in the construct validity of your measure
175
Perform logical analyses and empirical tests to determine if observations obtained on the measure conform to the conceptual definition
176
Why is it important? How to do it? What are some of the best practices?
177
Instrumentation in Perspective
178
Instruments are devices with their own advantages and disadvantages, some more precise than others, and
Survey Instruments
Survey instrumentation
Most widely used across disciplines Most abused technique---people
179
Why do we do surveys?
To describe the populations: What is going on? Theoretical reasons: Why is it going on?
Develop and test theory Theory should always guide survey development and data collection
180
181
1. Have a job which leaves you sufficient time for your personal or family life. (.86)
2. Have training opportunities (to improve your skills or learn new skills). (-.82)
3. Have good physical working conditions (good ventilation and lighting, adequate work space, etc.). (-.69)
4. Fully use your skills and abilities on the job. (-.63)
5. Have considerable freedom to adapt your own approach to the job. (.49)
6. Have challenging work to do, work from which you can get a personal sense of accomplishment. (.46)
7. Work with people who cooperate well with one another. (.20)
8. Have a good working relationship with your manager. (.20)
Adapted from Heine et al. (2002)
I would rather say no directly, than risk being misunderstood. (12)
Speaking up during a class is not a problem for me. (14)
Having a lively imagination is important to me. (12)
I am comfortable with being singled out for praise or rewards. (13)
I am the same person at home that I am at school. (13)
Being able to take care of myself is a primary concern for me. (12)
I act the same way no matter who I am with. (13)
I prefer to be direct and forthright when dealing with people I have just met. (14)
I enjoy being unique and different from others in many respects. (13)
My personal identity, independent of others, is very important to me. (14)
I value being in good health above everything. (8)
Adapted from Heine et al. (2002)
183
Construct Definition
Personal computer satisfaction is an emotional response resulting from an evaluation of the speed, durability, and initial price, but not the appearance of a personal computer. This evaluation is expected to depend on variation in the actual characteristics of the computer (e.g., speed) and on the expectations a participant has about those characteristics. When characteristics meet or exceed expectations, the evaluation is expected to be positive (satisfaction). When characteristics do not come up to expectations, the evaluation is expected to be negative (dissatisfaction).
From Schwab (1999)
184
Decide how satisfied or dissatisfied you are with each characteristic of your personal computer using the scale below. Circle the number that best describes your feelings for each statement.
Very Dissatisfied 1 Dissatisfied 2 Neither Satisfied nor Dissatisfied 3 Satisfied 4 Very Satisfied 5
My satisfaction with:
185
1. Initial price of the computer                    1 2 3 4 5
2. What I paid for the computer                     1 2 3 4 5
3. How quickly the computer performs calculations   1 2 3 4 5
4. How fast the computer runs programs              1 2 3 4 5
5. Helpfulness of the salesperson                   1 2 3 4 5
6. How I was treated when I bought the computer     1 2 3 4 5
Construct Variance
186
[Diagram: total measure variance partitioned into systematic (reliable) variance and unreliability; reliable variance comprises valid construct variance plus contamination, and construct variance not captured by the measure is deficiency]
Step 5: Convergent/Discriminant Validity
187
Advantages: through adequate construct definitions, items should capture the domain of interest, thus assuring content validity in the final scale. Disadvantages: requires the researcher to possess working knowledge of the phenomena; may not be appropriate for exploratory studies. From Hinkin (1998)
189
Appropriate when the conceptual basis may not result in easily identifiable dimensions for which items can then be generated. Frequently researchers develop scales inductively by asking a sample of respondents to provide descriptions of their feelings about their organizations or to describe some aspects of behavior. Responses are then classified into a number of categories by content analysis, based on key words or themes or using a sorting process
190
191
192
As simple and short as possible. Language should be familiar to the target audience. Keep items consistent in terms of perspective (e.g., assess behaviors vs. affective responses). Each item should address a single issue (no double-barreled items). Leading questions should be avoided
I would never drink and drive for fear that I might be stopped by the police (yes or no) I am always furious (yes or no) I often lose my temper (never to always)
193
194
CVR = (n_e - N/2) / (N/2)
where n_e is the number of Subject Matter Experts (SMEs) rating the selection tool or skill being assessed as essential to the job (i.e., having good coverage of the KSAs required for the job), and N is the total number of experts.
CVR = 1 when all judges believe the tool/item is essential; CVR = -1 when none of the judges believes the tool/skill is essential; CVR = 0 means only half of the judges believe that the tool/item is essential.
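The three boundary cases above can be verified in a few lines of Python (a minimal sketch; the function name `cvr` is ours, not a standard library call):

```python
def cvr(n_essential: int, n_experts: int) -> float:
    """Lawshe's content validity ratio: CVR = (n_e - N/2) / (N/2)."""
    half = n_experts / 2
    return (n_essential - half) / half

assert cvr(10, 10) == 1.0   # all judges rate the item essential
assert cvr(5, 10) == 0.0    # exactly half do
assert cvr(0, 10) == -1.0   # none do
```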
195
4 - 6 items for most constructs. For initial item generation, twice as many items should be generated
196
Item Scaling
Scale used should generate sufficient variance among respondents for subsequent statistical analyses. Likert-type scales are the most frequently used in survey questionnaires. Likert developed the scale to be composed of five equal-appearing intervals with a neutral midpoint. Coefficient alpha reliability with Likert scales has been shown to increase up to the use of five points, but then it levels off
197
Sample size: Recommendations for item-to-response ratios range from 1:4 to 1:10 for each set of scales to be factor analyzed
198
e.g., if 30 items were retained to develop three measures, a sample size of 150 observations should be sufficient in exploratory factor analyses. For confirmatory factor analysis, a minimum sample size
199
Interitem correlations of the variables should be examined first. Items with corrected item-total correlations smaller than 0.40 can be eliminated. Then conduct an exploratory factor analysis: retain items with loadings greater than 0.40 and/or loading twice as strongly on the appropriate factor as on any other factor. Eigenvalues greater than 1 and a scree test of the percentage of variance explained should also be examined. Be aware of construct deficiency problems in deleting items
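The corrected item-total screening step can be sketched in plain Python. The helper names and the tiny data set below are hypothetical, with the third item deliberately reverse-keyed so it fails the 0.40 cutoff:

```python
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sqrt(sum((a - mx) ** 2 for a in x)) *
                  sqrt(sum((b - my) ** 2 for b in y)))

def corrected_item_total(items):
    """items: one list of scores per item, same respondents in each list.
    Each item is correlated with the total of the *other* items, so the
    item does not inflate its own correlation."""
    out = []
    for i, item in enumerate(items):
        rest_total = [sum(col) - col[i] for col in zip(*items)]
        out.append(pearson(item, rest_total))
    return out

scores = [[5, 4, 5, 2, 4],   # item 1
          [4, 4, 5, 1, 5],   # item 2
          [1, 2, 1, 5, 2]]   # item 3, deliberately reverse-keyed
vals = corrected_item_total(scores)
assert vals[2] < 0.40        # item 3 falls below the cutoff: candidate for elimination
```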
Reliability is the accuracy or precision of a measuring instrument and is a necessary condition for validity. Use Cronbach's alpha to measure internal consistency; 0.70 should serve as the minimum for newly developed measures.
200
Coefficient alpha
The average of all possible split-half reliabilities.
alpha = [n / (n - 1)] * [1 - (sum of the item variances) / (variance of the total scores)]
n is the number of items; each item's variance is computed across all applicants, and the total-score variance is computed across applicants' totals
201
An example of coefficient alpha

Item     A    B    C
1        6    5    4
2        6    4    5
3        5    3    3
4        4    4    4
5        4    5    4
Total    25   21   20

Sum of item variances = 1 + 1 + 1.33 + 0 + 0.33 = 3.67
Variance of the totals (25, 21, 20) = 7
alpha = (5/4) * (1 - 3.67/7) = .60 (approximately)

202
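The worked example can be reproduced directly from the formula; a minimal Python sketch (the function name `cronbach_alpha` is ours):

```python
def cronbach_alpha(items):
    """alpha = [n/(n-1)] * [1 - (sum of item variances) / (variance of totals)]."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    n = len(items)
    totals = [sum(col) for col in zip(*items)]
    return (n / (n - 1)) * (1 - sum(var(i) for i in items) / var(totals))

# The five items and three subjects (A, B, C) from the example
items = [[6, 5, 4], [6, 4, 5], [5, 3, 3], [4, 4, 4], [4, 5, 4]]
assert round(cronbach_alpha(items), 3) == 0.595
```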
203
In exploratory research where hypothesized measures are developed for new constructs, the alphas need to exceed .70. In basic research where you use well-established instruments for constructs, the alphas need to exceed .80. In applied research where you need to make decisions based on individual scores, the alphas need to exceed .90.
Items that load clearly in an exploratory factor analysis may demonstrate a lack of fit in a multiple-indicator measurement model due to lack of external consistency. It is recommended that a confirmatory factor analysis be conducted using the item variance-covariance matrix computed from data collected from an independent sample. Then assess the goodness-of-fit index, t-values, and chi-square
204
Convergent validity: when there is a high correspondence between scores from two or more different measures of the same construct. Discriminant validity: when scores from measures of different constructs do not converge.
Nomological networks---relationships between a construct under measurement consideration and other constructs.
Criterion-related validity
205
Convergent Validity
From Schwab (1999)
Construct
Measure A
Measure B
206
Step 6: Replication
Find an independent sample and collect more data using the measure. The replication should include confirmatory factor analysis, assessment of internal consistency, and convergent, discriminant, and criterion-related validity assessment
207
208
The four cells of an MTMM matrix: monotrait-monomethod (reliability diagonal), monotrait-heteromethod (same trait, different methods), heterotrait-monomethod (different traits, same method), and heterotrait-heteromethod (different traits, different methods).
Note: SE: self-esteem; SD: self-disclosure; LC: locus of control
209 Adapted from http://www.socialresearchmethods.net/kb/mtmmmat.htm
Interpreting MTMM
Monotrait-heteromethod correlations (convergent validity)
210
An open-ended survey was administered to 148 MBAs, asking them to list individuals' extra-role efforts; 445 statements were collected. The list was reduced to 180 by eliminating redundant and ambiguous statements, which were then sorted into 19 groups based on similarity. A general statement was written to reflect each group and compared with the construct, resulting in 10 prototypical activities reflecting the construct. The items were pretested with 20 MBA students to check for clarity and gather suggestions for wording improvements. The measure was then pretested with a sample of 152 working MBAs to assess internal consistency of the items and to check whether the 10 specific behaviors were extra-role activities; 77% checked six or more.
211
212
213
Try to institute new methods that are more effective. Try to introduce new structures, technologies, or approaches to improve efficiency. Try to change how his/her job is executed in order to be more effective. Try to bring about improved procedures for the work unit or
Taking charge
214
Distributive Justice
Procedural Justice
Interactive Justice
Informational Justice
215
Distributive Justice
Does your outcome reflect the effort you have put into your work? (Leventhal, 1976)
Have you been able to express your views and feelings during those procedures? (Thibaut & Walker, 1975)
Procedural justice
Interactive justice
Has he/she treated you in a polite manner? (Bies & Moag, 1986)
Has he/she communicated details in a timely manner? (Shapiro, et al., 1994)
216
Informational justice
[Diagram: distributive, procedural, interactive, and informational justice linked to outcomes including outcome satisfaction, rule compliance, leader evaluation, and collective self-esteem]
217
218
Etic Orientation
Emic Orientation
Translation
Adaptation
Decontextualization
Contextualization
219
Major Strengths
Major Limitations
Translation approach
Target construct is equivalent across cultures in terms of overall definition, content domain, and empirical representations of the content domain Availability of high quality culturally unbiased Western scales for target construct
Low developmental time and costs Preserve the possibility of a high level of equivalence Allow for direct crosscultural comparison of research findings
Difficulty in achieving semantic equivalence between the Chinese and Western scales Culturally unbiased Western scales are hard to come by
Adaptation approach
Target construct is equivalent between cultures in terms of overall definition and content domain Availability of high quality Western scales for target construct
Low to moderate developmental time and costs Ease of scholarly exchanges of research findings with the Western literature
Difficulty in conducting cross-cultural research Drastic adaptation may create a new scale that requires extensive validation in the Chinese context
220
Decontextualization approach
Target construct is etic or universal or culturally invariant High quality scale for the target construct is unavailable in the literature
Opportunity to develop universal measure for target construct Ease of scholarly exchange of research findings with the Western literature
Long developmental time and high developmental costs Items tend to be phrased at a more abstract level, which may limit its informational and practical value
Contextualization approach
Target construct is emic or culture specific High quality emic scale for target construct unavailable in the literature
Opportunity to develop scales highly relevant for the Chinese context Opportunity to contribute context-specific knowledge to Chinese management
Long developmental time and high developmental costs Limited generalizability of the new scale Hard to communicate research findings with the Western literature
221
Should you use well-established scales from the (western) literature or develop local scales?
Align your measure with your theoretical orientation. When you take an etic (universal or culturally invariant) perspective on a research topic, you assume that the Chinese context is largely irrelevant. Here your study is based on general theories, and you should use well-established measures in the literature. When you take an emic (culture-specific) perspective on a research topic, you assume that the phenomenon is specific to the Chinese context. Here your study is based on context-embedded theories, and you should consider using measures appropriate for the Chinese context. When you do cross-cultural research, you try to study phenomena common across societies. You model culture explicitly in your theories (either as a main or a moderating effect) and should apply measures that work in multiple cultural contexts.
222
Key issues
[Diagram (high/low axis): scale-development activities arranged from empirical to conceptual: sampling/method, classification/panel test, creativity & insight, content validation]
Collect behavioral incidents
Domain definition
224
Content validation
Write items based on your construct definition (be aware of contamination & deficiency!!!) Be sure to review items of extant scales Incident descriptions may not make good survey items (may be too specific, too ambiguous) Schriesheim et al. (1990) method is useful for multidimensional constructs
226
227
Study the literature & the phenomenon to come up with a broad definition of the construct. Collect good behavioral incidents (quantity & quality). Build a sound classification system. Conduct a panel test to verify your results. Use inductive and deductive approaches
228
Good survey measures must be grounded in sound theory and conceptual definitions. Developing good survey measures takes much time, resources, experience, and commitment, but the payoff can be immense!! Avoid convenience measurement at all times!!!
Instruments
tools researchers use to collect data for research studies (alternatively called tests)
1. Cognitive instruments...
Measure an individual's attainment in academic areas; typically used to diagnose strengths and weaknesses
achievement tests
provide information about how well the test takers have learned what they have been taught in school achievement is determined by comparing it to the norm, the performance of a national group of similar students who have taken the same test
aptitude tests
measure the intellect and abilities not normally taught and often are used to predict future performance typically provide an overall score, a verbal score, and a quantitative score
2. Affective instruments...
Measure characteristics of individuals along a number of dimensions and to assess feelings, values, and attitudes toward self, others, and a variety of other activities, institutions, and situations
attitude scales
self-reports of an individual's beliefs, perceptions, or feelings about self, others, and a variety of activities, institutions, and situations; frequently use Likert, semantic differential, Thurstone, or Guttman scales
values tests
measure the relative strength of an individual's valuing of theoretical, economic, aesthetic, social, political, and religious values
personality inventories
an individual's self-report measuring how behaviors characteristic of defined personality traits describe that individual
3. Projective instruments...
associational tests: participants react to a stimulus such as a picture, inkblot, or word onto which they project a description
Selecting an instrument...
1. determine precisely the type of instrument needed 2. identify and locate appropriate instruments 3. compare and analyze instruments 4. select best instrument
Instrument sources
Buros Mental Measurements Yearbook Tests in Print PRO-ED Publications Test Critiques Compendium ETS Test Collection Database ERIC/AE Test Review Locator ERIC/Buros Test Publisher Directory
Types of validity...
1. Content validity 2. Criterion-related validity 3. Construct validity
1. Content validity: the degree to which an instrument measures an intended content area
forms of content validity:
sampling validity: does the instrument reflect the total content area?
item validity: are the items included on the instrument relevant to the measurement of the intended content area?
2. Criterion-related validity: the degree to which scores on an instrument correlate with scores on an external criterion measure, discriminating between those individuals who possess a certain characteristic and those who do not
forms of criterion-related validity:
concurrent validity: the degree to which scores on one test correlate to scores on another test when both tests are administered in the same time frame
predictive validity: the degree to which a test can predict how well an individual will do in a future situation
3. Construct validity: a series of studies validate that the instrument really measures what it purports to measure
Types of reliability...
1. Stability 2. Equivalence 3. Internal consistency
1. Stability (test-retest): the degree to which two scores on the same instrument are consistent over time
2. Equivalence (equivalent forms): the degree to which identical instruments (except for the actual items included) yield identical scores
3. Internal consistency (split-half reliability with the Spearman-Brown correction formula, Kuder-Richardson and Cronbach's alpha reliabilities, scorer/rater reliability): the degree to which one instrument yields consistent results
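The Spearman-Brown step in split-half reliability projects the correlation between two half-tests up to the full test length. A minimal sketch (the function name is ours): halves correlating at .60 imply a full-test reliability of .75:

```python
def spearman_brown(half_r: float) -> float:
    """Spearman-Brown correction: full-test reliability = 2r / (1 + r)."""
    return 2 * half_r / (1 + half_r)

# Two halves correlating .60 -> corrected full-test reliability .75
assert abs(spearman_brown(0.6) - 0.75) < 1e-9
```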
Data: the pieces of information researchers collect through instruments to examine a topic or hypothesis
Constructs: abstractions of behavioral factors that cannot be observed directly and which researchers invent to explain behavior
Measurement scales...
Qualitative (categorical):
1. nominal variables
Quantitative (continuous):
2. ordinal variables
3. interval variables
4. ratio variables
2. ordinal (order): classifies persons or objects and ranks them in terms of the degree to which those persons or objects possess a characteristic of interest
3. interval: ranks, orders, and classifies persons or objects according to equal differences with no true zero point
4. ratio: ranks, orders, classifies persons or objects according to equal differences with a true zero point
Norm reference: an indication of how one individual performed on an instrument compared to the other students performing on the same instrument
Self reference: measuring how an individual's performance changes over time
Standard error of measurement: an estimate of how often a researcher can expect errors of a given size on an instrument
Types of Research
Descriptive - attempts to describe and explain conditions of the present. It relies on qualitative and quantitative data gathered from written documents, personal interviews, test results, surveys, etc. Often people will call this type of research Survey Research
Descriptive Research
Because of its flexibility and the fact that it deals with current topics, descriptive research is probably the most popular form of research in education today. It is also popular because data can be collected from a wide variety of sources.
Descriptive Research
It provides a descriptive analysis of a given population or sample; any inferences are left to the readers. Qualitative, quantitative, or a combination of both types of data can be presented. Hypotheses or broad research questions are used.
Descriptive Research
Data Sources
Persons such as teachers, students, parents, administrators, etc. Documents such as policy statements, curricular guidelines. Records such as student transcripts.
Descriptive Research
Research Tools
Structured interviews. Structured questionnaires and surveys Standardized tests.
What are the characteristics of agricultural education students? What is the level of job satisfaction of extension agents? Why do teachers leave teaching?
278
Sampling Terminology
In most applied social research, we are interested in generalizing to specific groups. The group you wish to generalize to is often called the population in your study. This is the group you would like to sample from because this is the group you are interested in generalizing to. Let's imagine that you wish to generalize to urban homeless males between the ages of 30 and 50 in the United States. If that is the population of interest, you are likely to have a very hard time developing a reasonable sampling plan.
279
You are probably not going to find an accurate listing of this population, and even if you did, you would almost certainly not be able to mount a national sample across hundreds of urban areas. So we probably should make a distinction between the population you would like to generalize to (the theoretical population ), and the population that will be accessible to you (the accessible population). In this example, the accessible population might be homeless males between the ages of 30 and 50 in six selected urban areas across the U.S.
280
Once you've identified the theoretical and accessible populations, you have to do one more thing before you can actually draw a sample. You have to get a list of the members of the accessible population.
The listing of the accessible population from which you'll draw your sample is called the sampling frame. If you were doing a phone survey and selecting names from the telephone book, the book would be your sampling frame.
281
282
Finally, you actually draw your sample (using one of the many sampling procedures). The sample is the group of people who you select to be in your study. The sample is not necessarily the group of people who are actually in your study. You may not be able to contact or recruit all of the people you actually sample, or some could drop out over the course of the study. The group that actually completes your study is a subsample of the sample -- it doesn't include nonrespondents or dropouts.
283
Sampling is a difficult multi-step process, and there are lots of places you can go wrong. In fact, as we move from each step to the next in identifying a sample, there is the possibility of introducing systematic error or bias. For instance, even if you are able to identify perfectly the population of interest, you may not have access to all of them. And even if you do, you may not have a complete and accurate enumeration or sampling frame from which to select. And, even if you do, you may not draw the sample correctly or accurately. And, even if you do, they may not all come and they may not all stay.
284
285
286
Sampling Error
Sampling error gives us some idea of the precision of our statistical estimate.
A low sampling error means that we had relatively less variability or range in the sampling distribution.
287
We base our calculation on the standard deviation of our sample. The greater the sample standard deviation, the greater the standard error (and the sampling error). The standard error is also related to the sample size. The greater your sample size, the smaller the standard error. Why? Because the greater the sample size, the closer your sample is to the actual population itself. If you take a sample that consists of the entire population you actually have no sampling error because you don't have a sample, you have the entire population. In that case, the mean you estimate is the parameter.
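The relationship described above is easy to demonstrate with simulated data (a sketch; the population mean of 50 and standard deviation of 10 are arbitrary choices):

```python
from math import sqrt
from random import gauss, seed

def standard_error(sample):
    """Standard error of the mean: sample standard deviation / sqrt(n)."""
    n = len(sample)
    mean = sum(sample) / n
    sd = sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    return sd / sqrt(n)

seed(42)
small = [gauss(50, 10) for _ in range(25)]     # n = 25
large = [gauss(50, 10) for _ in range(2500)]   # n = 2500
# A hundredfold larger sample shrinks the standard error roughly tenfold.
assert standard_error(large) < standard_error(small)
```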
288
A probability sampling method is any method of sampling that utilizes some form of random selection.
Each unit in the study population has a known probability of being selected.
289
Some Definitions
N = the number of cases in the sampling frame n = the number of cases in the sample NCn = the number of combinations (subsets) of n from N f = n/N = the sampling fraction
290
The simplest form of random sampling is called simple random sampling. It is used when the population is homogeneous. Objective: to select n units out of N such that every unit, and every possible subset of n units (NCn of them), has an equal chance of being selected. Procedure: use a table of random numbers, a computer random number generator, or a mechanical device to select the sample.
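In practice, the procedure is one call in Python's standard library (a sketch; the frame of units numbered 1..100 is illustrative):

```python
import math
import random

N, n = 100, 10
frame = list(range(1, N + 1))      # the sampling frame: units numbered 1..N
sample = random.sample(frame, n)   # every unit (and every size-n subset) equally likely

f = n / N                          # sampling fraction
n_subsets = math.comb(N, n)        # NCn: number of possible samples
assert len(sample) == n == len(set(sample))
assert all(unit in frame for unit in sample)
assert f == 0.1 and n_subsets > 0
```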
291
292
But it requires an accurate list of the study population, and the members of the population may be scattered over a large area (which makes sampling more difficult and more expensive).
293
Sometimes called proportional or quota random sampling. Involves dividing the population into homogeneous subgroups (called strata) and then taking a simple random sample in each subgroup. Procedure: divide the population into non-overlapping groups (i.e., strata) N1, N2, N3, ... Ni, such that N1 + N2 + N3 + ... + Ni = N. Then take a simple random sample f = n/N from each stratum.
294
proportionate stratified random sampling: When we use the same sampling fraction within strata.
disproportionate stratified random sampling: When we use different sampling fractions in the strata
295
Example
Let's say that a population (N = 1000) is composed of three groups: 850 Caucasians (85%), 100 African-Americans (10%), and 50 Hispanic-Americans (5%). If we just did a simple random sample of n = 100 with a sampling fraction of 10%, we would expect by chance alone to get only about 10 and 5 persons from our two smaller groups. And, by chance, we could get fewer than that.
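The proportionate case for this example can be sketched in Python (the stratum labels and unit lists are illustrative stand-ins for a real sampling frame):

```python
import random

# Hypothetical population matching the example: 850 / 100 / 50
strata = {
    "Caucasian": list(range(850)),
    "African-American": list(range(100)),
    "Hispanic-American": list(range(50)),
}

def stratified_sample(strata, fraction):
    """Proportionate stratified sampling: the same fraction within every stratum."""
    return {name: random.sample(units, round(len(units) * fraction))
            for name, units in strata.items()}

sizes = {name: len(units) for name, units in stratified_sample(strata, 0.10).items()}
# A 10% fraction guarantees 85 / 10 / 5 from the three strata,
# which a simple random sample of 100 would only hit by chance.
assert sizes == {"Caucasian": 85, "African-American": 10, "Hispanic-American": 5}
```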
296
297
298
Advantages
It is preferred over simple random sampling when the population is composed of different groups of different sizes. It assures that the sample represents not only the overall population, but also key subgroups of the population, especially small minority groups. Stratified random sampling will generally have more statistical precision than simple random sampling. This will only be true if the strata or groups are homogeneous.
299
Steps to follow in order to achieve a systematic random sample:
1. Number the units in the population from 1 to N
2. Decide on the n (sample size) that you want or need
3. k = N/n = the interval size
4. Randomly select an integer between 1 and k
5. Then take every kth unit
It is essential that the units in the population are randomly ordered, at least with respect to the characteristics you are measuring.
300
Example
Let's assume that we have a population that only has N = 100 people in it and that you want to take a sample of n = 20. To use systematic sampling, the population must be listed in a random order. The sampling fraction would be f = 20/100 = 20%. In this case, the interval size k is equal to N/n = 100/20 = 5. Now, select a random integer from 1 to 5. In our example, imagine that you chose 4. Now, to select the sample, start with the 4th unit in the list and take every kth unit (every 5th, because k = 5).
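The example translates directly into code (a minimal sketch; the function name is ours):

```python
def systematic_sample(N, n, start):
    """Take every k-th unit (k = N // n), beginning at `start` (1-based)."""
    k = N // n
    return list(range(start, N + 1, k))

sample = systematic_sample(100, 20, 4)   # k = 5, random start 4
assert len(sample) == 20
assert sample[:3] == [4, 9, 14] and sample[-1] == 99
```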
301
302
Advantages
1. It is fairly easy to do. You only have to select a single random number to start things off.
2. It may also be more precise than simple random sampling.
3. Finally, in some situations there is simply no easier way to do random sampling.
303
The problem with random sampling methods when we have to sample a population that's dispersed across a wide geographic region is that you will have to cover a lot of ground geographically in order to get to each of the units you sampled. Your interviewers are going to have a lot of traveling to do. For this reason cluster or area random sampling was invented.
304
Steps
Divide population into clusters (usually along geographic boundaries) Randomly sample clusters Measure all units within sampled clusters
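The three steps above can be sketched as follows (the cluster names and sizes are hypothetical):

```python
import random

# Step 1 (hypothetical frame): 10 geographic clusters of 20-40 units each
clusters = {f"area_{i}": [f"a{i}_u{j}" for j in range(random.randint(20, 40))]
            for i in range(10)}

def cluster_sample(clusters, n_clusters):
    """Step 2: randomly sample clusters; step 3: measure every unit inside them."""
    chosen = random.sample(list(clusters), n_clusters)
    return [unit for name in chosen for unit in clusters[name]]

units = cluster_sample(clusters, 3)
assert 60 <= len(units) <= 120   # three clusters of 20-40 units each
```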
305
306
5. Multi-Stage Sampling
When we combine sampling methods, we call this multi-stage sampling. For example, we may do cluster sampling as a first stage, then do stratified sampling or simple random sampling within each cluster.
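A two-stage version of this idea (cluster sampling first, then simple random sampling within each sampled cluster) might look like the sketch below; the frame of 20 schools is hypothetical:

```python
import random

# Hypothetical two-stage frame: 20 schools of 100 students each
schools = {f"school_{i}": list(range(100)) for i in range(20)}

def multi_stage_sample(frame, n_clusters, n_per_cluster):
    stage_one = random.sample(list(frame), n_clusters)          # stage 1: cluster sampling
    return {name: random.sample(frame[name], n_per_cluster)     # stage 2: SRS within cluster
            for name in stage_one}

sample = multi_stage_sample(schools, 5, 10)
assert len(sample) == 5
assert all(len(students) == 10 for students in sample.values())
```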
307
The difference between nonprobability and probability sampling is that nonprobability sampling does not involve random selection and probability sampling does. Does that mean that nonprobability samples aren't representative of the population? Not necessarily. But it does mean that nonprobability samples cannot depend upon the rationale of probability theory.
308
At least with a probabilistic sample, we know the odds or probability that we have represented the population well. We are able to estimate confidence intervals for the statistic. With nonprobability samples, we may or may not represent the population well, and it will often be hard for us to know how well we've done so. In general, researchers prefer probabilistic or random sampling methods over nonprobabilistic ones, and consider them to be more accurate and rigorous. However, in applied social research there may be circumstances where it is not feasible, practical or theoretically sensible to do random sampling.
309
We can divide nonprobability sampling methods into two broad types: accidental or purposive.
Most sampling methods are purposive in nature because we usually approach the sampling problem with a specific plan in mind.
The most important distinctions among these types of sampling methods are the ones between the different types of purposive sampling approaches.
310
Used in exploratory research where the researcher is interested in getting an inexpensive approximation of the truth without incurring the cost and time required to select a random sample. Examples:
1. "Man on the street" interviews conducted frequently by television news programs to get a quick (although non-representative) reading of public opinion.
2. Ask patients who come into a clinic or a hospital to volunteer for a certain study.
3. Take the first 10 people entering a supermarket.
311
Purposive Sampling
In purposive sampling, we sample with a purpose in mind. We usually would have one or more specific predefined groups we are seeking. For instance, have you ever run into people in a mall or on the street who are carrying a clipboard and who are stopping various people and asking if they could interview them? Most likely they are conducting a purposive sample (and most likely they are engaged in market research). For example, they might be looking for Caucasian females between 30-40 years old.
They size up the people passing by and anyone who looks to be in that category they stop to ask if they will participate. One of the first things they're likely to do is verify that the respondent does in fact meet the criteria for being in the sample. Purposive sampling can be very useful for situations where you need to reach a targeted sample quickly and where sampling for proportionality is not the primary concern. With a purposive sample, you are likely to get the opinions of your target population, but you are also likely to overweight subgroups in your population that are more readily accessible.
1. Modal Instance Sampling: In statistics, the mode is the most frequently occurring value in a distribution. In sampling, when we do a modal instance sample, we are sampling the most frequent case, or the "typical" case. In many informal public opinion polls, for instance, interviewers talk to a "typical" voter. There are a number of problems with this sampling approach.
First, how do we know what the "typical" or "modal" case is? We could say that the modal voter is a person who is of average age, educational level, and income in the population. But it's not clear that using the averages of these is the fairest approach (consider the skewed distribution of income, for instance). And how do you know that those three variables -- age, education, income -- are the only or even the most relevant ones for classifying the typical voter? What if religion or ethnicity is an important discriminator? Clearly, modal instance sampling is only sensible for informal sampling contexts.
2. Expert Sampling: Expert sampling involves the assembling of a sample of persons with known or demonstrable experience and expertise in some area. Often, we convene such a sample under the auspices of a "panel of experts." There are actually two reasons you might do expert sampling. First, because it would be the best way to elicit the views of persons who have specific expertise. In this case, expert sampling is essentially just a specific subcase of purposive sampling.
The other reason you might use expert sampling is to provide evidence for the validity of another sampling approach you've chosen. For instance, let's say you do modal instance sampling and are concerned that the criteria you used for defining the modal instance are subject to criticism. You might convene an expert panel consisting of persons with acknowledged experience and insight into that field or topic and ask them to examine your modal definitions and comment on their appropriateness and validity. The advantage of doing this is that you have some acknowledged experts to back you. The disadvantage is that even the experts can be wrong.
3. Quota Sampling: In quota sampling, you select people nonrandomly according to some fixed quota. There are two types of quota sampling: proportional and non-proportional. In proportional quota sampling you want to represent the major characteristics of the population by sampling a proportional amount of each. For instance, if you know the population has 40% women and 60% men, and you want a total sample size of 100, you will continue sampling until you reach those percentages and then stop. So, if you already have the 40 women for your sample but not the 60 men, you will continue to sample men; even if legitimate women respondents come along, you will not sample them because you have already "met your quota."
Non-proportional quota sampling is a bit less restrictive. In this method, you specify the minimum number of sampled units you want in each category. Here, you're not concerned with having numbers that match the proportions in the population. Instead, you simply want enough to assure that you will be able to talk about even small groups in the population. This method is the non-probabilistic analogue of stratified random sampling in that it is typically used to assure that smaller groups are adequately represented in your sample.
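The proportional quota logic described above can be sketched in code. This is an illustrative sketch, not from the source; the respondent stream and the 40/60 quota are hypothetical.

```python
# Proportional quota sampling sketch: accept respondents of a category
# only until that category's quota is filled (hypothetical data).

def quota_sample(respondents, quotas):
    """respondents: iterable of category labels in arrival order.
    quotas: dict mapping category -> target count."""
    counts = {cat: 0 for cat in quotas}
    sample = []
    for cat in respondents:
        if cat in counts and counts[cat] < quotas[cat]:
            counts[cat] += 1
            sample.append(cat)
        if all(counts[c] == quotas[c] for c in quotas):
            break  # every quota met; stop sampling
    return sample

# 40% women, 60% men in a total sample of 10
stream = ["W", "M"] * 20  # alternating arrivals
sample = quota_sample(stream, {"W": 4, "M": 6})
print(sample.count("W"), sample.count("M"))  # -> 4 6
```

Once the 4 women are collected, later "W" arrivals are skipped even though they are legitimate respondents, which is exactly the behavior the slide describes.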
4. Heterogeneity Sampling: We sample for heterogeneity when we want to include all opinions or views, and we aren't concerned about representing these views proportionately (sampling for diversity). Our primary interest is in getting a broad spectrum of ideas, not identifying the "average" or "modal instance" ones (we are sampling ideas, not people). We imagine that there is a universe of all possible ideas relevant to some topic and that we want to sample this population, not the population of people who have the ideas. Clearly, in order to get all of the ideas, and especially the "outlier" or unusual ones, we have to include a broad and diverse range of participants. Heterogeneity sampling is, in this sense, almost the opposite of modal instance sampling.
5. Snowball Sampling: In snowball sampling, you begin by identifying someone who meets the criteria for inclusion in your study. You then ask them to recommend others who they may know who also meet the criteria. Snowball sampling is especially useful when you are trying to reach populations that are inaccessible or hard to find. For instance, if you are studying the homeless, you are not likely to be able to find good lists of homeless people within a specific geographical area. However, if you go to that area and identify one or two, you may find that they know very well who the other homeless people in their vicinity are and how you can find them.
Validity
Do the methods and tools truly measure what they are intended to measure?
Internal, external, statistical.
Reliability
Do the methods and tools/instruments produce consistent results across multiple observations?
- Directly taken field notes from interviews and observations,
- Expanded, typed notes made as soon as possible after the fieldwork (this includes comments on problems and ideas that arise during each stage of the fieldwork and that will guide further research),
- A running record of analysis and interpretation (open coding and axial coding).

- The use of multiple sources of evidence,
- The establishment of a chain of evidence,
- Letting key informants review draft result reports.
External validity means establishing the domain to which a study's findings can be generalized. It is ensured through the use of a replication logic (analytical generalisation).
- Relate case findings to existing or emerging bodies of literature, part of which will have been analysed in the literature section of the thesis
You must ask the right question. Respondents must understand your question. Respondents must know the answer. Respondents must be willing and able to tell you the answer.
1) Think through your research questions and objectives before you write questions.
2) Prepare an analysis plan before you write questions.
3) Ask yourself, in relation to points #1 and #2 above, whether each question on your list is necessary. Even if the data would be interesting, it has to be used in analysis to make the cut!
2) Those that ask about psychological states or attitudes 3) Those that ask about knowledge
What Is A Good Question?
One that yields a truthful, accurate answer
One that asks for one answer on one dimension
One that accommodates all possible contingencies of response
One that uses specific, simple language
One that has mutually exclusive response options
One that produces variability in response
One that minimizes social desirability
One that is pretested
Better question:
Now I'm going to read a list of household appliances. As I read each one, please tell me whether or not your household has purchased this type of appliance new from the store during the past 6 months. How about a refrigerator? A kitchen range or oven? A microwave?
Better Question:
Compared to one year ago, are you now paying more, less, or about the same for
a. auto insurance? b. life insurance?
Specify
Specify who, what, when, where and how.
For example, whose income? What's included? Over what period of time? Example:
In 2002, what was your total household income, before taxes? Please count income from all members of your household, including wages from employment, disability, social security, and public aid.
More Clear:
Compared to your last neighborhood, do you now live closer to your family, are you further from your family, or are you about the same distance?
Social Desirability
Respondents will try to represent themselves to the interviewer in a way that reflects positively on them. As questions become more threatening, respondents are more likely to overstate or understate behavior, even when the best question wording is used.
For socially desirable behavior, it is better to ask whether respondents have ever engaged in the behavior before asking whether they currently engage in it. For socially undesirable behavior, it is better to ask about current behavior first, rather than about usual or typical behavior. Train interviewers to maintain a professional attitude. Self-administered, computer-assisted procedures can reduce question threat and improve reporting on sensitive questions. Longer questions reduce sensitivity when obtaining information on frequencies of socially undesirable behavior.
Categories may be leading to respondents. They may make it too easy to answer without thinking and may limit spontaneity. Not best when asking for the frequency of sensitive behaviors or when there are numerous possible responses.
Response categories should be consistent with the question. Categories must be exhaustive, including every possible answer. Categories must be mutually exclusive (no overlap). If appropriate, include a "don't know" category.
Respondents can generally only remember a maximum of 5 responses unless visual cues are used. Using graphic images such as thermometers and ladders, and using card sorting for complex ratings, is effective. The number of points in a scale should be determined by how you intend to use the data. With scales with few points, every point can be labeled; in longer scales, only the endpoints are labeled.
Ordering Response Categories
It is usually better to list responses from the lower level to the higher level. Associate greater response levels with greater numbers. Start with the end of the scale that is least socially desirable.
Common practice is to omit it to push respondents (Rs) toward one end or the other, on the theory that few individuals are truly in the middle on a particular issue. Evidence from empirical studies shows that an explicit middle alternative, if offered, will often be chosen by Rs who would otherwise have been forced to pick a side; at the same time, it does not affect the ratio of pro to con responses or the size of the "don't know" category. Our usual recommendation is to include it unless there are persuasive reasons to exclude it.
Part 2:
Segment by topic. Ask about related topics together. Salient questions take precedence over less salient ones. Ask recall questions backwards in time. Use transitions when changing topics to give a sense of progress through the questionnaire. Leave objectionable questions (e.g., income) for the end. Put demographic questions at the end.
Start with easy questions that all respondents can answer with little effort. First questions should also be non-threatening. Don't start with knowledge or awareness questions. First questions should be directly related to the topic as described in the introduction or advance/cover letter.
Careful formatting is necessary to decrease errors and increase motivation. Respondents' needs must always take priority, followed by those of interviewers and data processors.
Number all questions sequentially. Use large, clear type; don't crowd. White space: place more blank space between questions than between subcomponents of questions. List answer categories vertically instead of horizontally. Avoid double/triple banking of response choices. Be consistent with the direction of response categories. Be consistent with the placement of response categories.
Don't split questions across pages. If necessary (e.g., a question requires 1.5 pages), restate the question and response categories on the next page. Put special instructions on the questionnaire as needed, next to the question. Distinguish directions from questions. Precode the questionnaire (vs. check boxes).
Mail questionnaires
Include a cover letter and contact information if the respondent needs help Use a booklet format
Easier to turn pages Prevents lost pages Permits double-page formats Looks more professional Include a title, graphic, name/address of sponsor on cover
It is preferable to test the questionnaires with people like those in your main study population. Test in the same mode to be used for the main study. Consider cognitive pretesting.
Think-aloud interviews. Revise/eliminate questions. Prepare interviewer instructions for the pilot test. Pilot test (10-20 cases). Revise/eliminate questions based on respondent and interviewer comments. Pilot test again, if necessary. Prepare final interviewer instructions. Be prepared to modify questionnaires if interviewer training raises problems. After interviewing is complete, debrief interviewers for potential problems. Use experience from one study for future planning.
Measurement error
Error variance--the extent of variability in test scores that is attributable to error rather than a true measure of behavior.
Validity
Reliability
Stability and consistency of the measuring instrument. A measure can be reliable without being valid, but it cannot be valid without being reliable.
Validity
Face validity
Just on its face, the instrument appears to be a good measure of the concept. Intuitive; arrived at through inspection.
e.g. Concept = pain level; Measure = verbal rating scale ("rate your pain from 1 to 10"). Face validity is sometimes considered a subtype of content validity.
Content validity
Content of the measure is justified by other evidence, e.g. the literature. Entire range or universe of the construct is measured. Usually evaluated and scored by experts in the content area. A CVI (content validity index) of .80 or more is desirable.
Construct validity
Sensitivity of the instrument to pick up minor variations in the concept being measured.
Can an instrument designed to measure anxiety pick up different levels of anxiety, or just its presence or absence? Measure two groups known to differ on the construct.
Ways of arriving at construct validity: hypothesis-testing method; convergent and divergent validity; multitrait-multimethod matrix; contrasted-groups approach.
Concurrent validity
Correspondence of one measure of a phenomenon with another measure of the same construct (administered at the same time).
Two tools are used to measure the same concept and then a correlational analysis is performed. The tool that has already been demonstrated to be valid is the gold standard with which the other measure must correlate.
Predictive validity
The ability of one measure to predict another future measure of the same concept.
If IQ predicts SAT, and SAT predicts QPA, then shouldn't IQ predict QPA? (We could skip SATs for admission decisions.) If scores on a parenthood readiness scale indicate levels of integrity, trust, intimacy and identity, couldn't this test be used to predict successful achievement of the developmental tasks of adulthood?
The researcher is usually looking for a more efficient way to measure a concept.
Concurrent and predictive validity are often listed as forms of criterion related validity.
Reliability
Homogeneity, equivalence and stability of a measure over time and subjects. The instrument yields the same results over repeated measures and subjects.
Expressed as a correlation coefficient (degree of agreement between times and subjects), 0 to +1. The reliability coefficient expresses the relationship between error variance, true variance and the observed score. The higher the reliability coefficient, the lower the error variance; hence, the higher the coefficient, the more reliable the tool! .70 or higher is acceptable.
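The relationship between the reliability coefficient, true variance and error variance stated above can be illustrated numerically. The variance figures below are hypothetical.

```python
# Classical test theory: observed variance = true variance + error variance,
# and reliability = true variance / observed variance (hypothetical values).

true_var = 8.0
error_var = 2.0
observed_var = true_var + error_var

reliability = true_var / observed_var
print(reliability)  # -> 0.8, above the .70 rule of thumb

# Lower error variance -> higher reliability, as the slide states
reliability_less_error = true_var / (true_var + 0.5)
print(round(reliability_less_error, 3))  # -> 0.941
```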
Stability
The same results are obtained over repeated administration of the instrument.
Test-retest reliability; parallel, equivalent or alternate forms.
Test-Retest reliability
The administration of the same instrument to the same subjects two or more times (under similar conditions -- not before and after treatment).
Parallel or alternate forms of a test are administered to the same individuals and scores are correlated. This is desirable when the researcher believes that repeated administration will result in test-wiseness
Homogeneity
Each item on an instrument is correlated to total score--an item with low correlation may be deleted. Highest and lowest correlations are usually reported.
Items are divided into two halves and then compared. Odd, even items, or 1-50 and 51-100 are two ways to split items.
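The split-half procedure can be sketched as follows. The 6-item test scores are hypothetical, and the Spearman-Brown correction used to step up from half-test to full-test reliability is the standard formula, not something stated in the source.

```python
# Split-half reliability sketch (hypothetical scores): correlate odd-item
# totals with even-item totals, then apply the Spearman-Brown correction
# to estimate full-test reliability.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Each row: one respondent's answers to a 6-item test (hypothetical)
scores = [[1, 2, 1, 2, 2, 1],
          [3, 3, 4, 3, 3, 4],
          [2, 2, 2, 3, 2, 2],
          [4, 5, 4, 4, 5, 5]]

odd_totals = [sum(row[0::2]) for row in scores]   # items 1, 3, 5
even_totals = [sum(row[1::2]) for row in scores]  # items 2, 4, 6

r_half = pearson_r(odd_totals, even_totals)
r_full = 2 * r_half / (1 + r_half)  # Spearman-Brown prophecy formula
print(round(r_full, 3))
```

The correction is needed because each half is only half as long as the real test, and shorter tests are less reliable.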
Estimate of homogeneity when items have a dichotomous response, e.g. yes/no items. Should be computed for a test on an initial reliability testing, and computed for the actual sample. Based on the consistency of responses to all of the items of a single form of a test.
Cronbach's alpha
Used with Likert scales or linear graphic response formats. Compares the consistency of response across all items on the scale. May need to be computed for each sample.
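Cronbach's alpha can be computed directly from its standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch with hypothetical Likert responses:

```python
# Cronbach's alpha from first principles (hypothetical Likert data).

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def cronbach_alpha(scores):
    """scores: list of respondents, each a list of k item responses."""
    k = len(scores[0])
    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

scores = [[4, 5, 4],
          [2, 2, 3],
          [5, 4, 5],
          [1, 2, 1]]
print(round(cronbach_alpha(scores), 3))  # -> 0.946
```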
Equivalence
Consistency of agreement of observers using the same measure or among alternate forms of a tool.
Parallel or alternate forms (described under stability); interrater reliability.
Interrater reliability
Used with observational data. Concordance between two or more observers' scores of the same event or phenomenon.
Critiquing
Were reliability and validity data presented, and are they adequate? Was the appropriate method used? Was the reliability recalculated for the sample? Are the limitations of the tool discussed?
8/2/2012
Marian College
Understanding basic research principles and methods Becoming familiar with educational statistics
Basic descriptive & inferential statistics Using SPSS to analyze data
Descriptive Statistics*
Measures of variance
Range, standard deviation
Are black drivers more likely to be ticketed for speeding than white drivers? (racial profiling?)
Does hormone-replacement therapy (HRT) do more harm than good for women with PMS? (estrogen study, July 2002)
Sample Statistics
Variables*
To compare two groups that are mutually exclusive (e.g. experimental vs. control groups, males vs. females)
Sampling*
Each individual in the population has an equal & independent chance to be selected.
stratified sampling
For studies of significant consequence; if the sample is very diversified; if minute differences are expected; for longitudinal studies; if you are to have subgroup analyses; if attrition of subjects is anticipated; if test measures are unreliable; if variables are complex and difficult to control.
Chi-square is a non-parametric test (expected n > 5 per cell). The data are frequency counts. Compare observed frequencies vs. expected frequencies. Arrange the data in 2 x 2 or 2 x 3 contingency tables. Use the Crosstabs function in the SPSS program to obtain crosstabulations and chi-squares.
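The observed-vs-expected comparison can be sketched outside SPSS as well. A minimal pure-Python example with hypothetical frequency counts:

```python
# Chi-square on a 2x2 contingency table (hypothetical counts): expected
# frequencies come from row/column totals, then chi-square = sum((O-E)^2 / E).

observed = [[30, 20],   # e.g. group A: yes / no
            [10, 40]]   # e.g. group B: yes / no

row_totals = [sum(row) for row in observed]
col_totals = [sum(row[j] for row in observed) for j in range(2)]
grand = sum(row_totals)

chi_sq = 0.0
for i in range(2):
    for j in range(2):
        expected = row_totals[i] * col_totals[j] / grand
        chi_sq += (observed[i][j] - expected) ** 2 / expected

print(round(chi_sq, 2))  # -> 16.67; compare to 3.84 (df=1, p=.05)
```

Since 16.67 exceeds the critical value, the two groups differ significantly in these hypothetical data.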
RESEARCH DESIGN
Planning the Research Design
The complete sequential steps the researcher undertakes in order to achieve the goal of the study (Cadornigara, 2002). An intelligent plan of the researcher in pursuing the goals of the study. It describes in sufficient detail the procedures employed in the research so they can be evaluated and repeated if necessary.
CONTENT
It includes the following:
1. All processes done during actual experimentation
2. All materials and amounts used in the study
3. Description of experimental and control set-ups
4. Kind of data gathered
5. Number of trials and replicates done
6. Description of the samples and reference population
7. Management of sample plants and/or animals
8. Sampling techniques
9. Identification and classification of variables
10. Chemical, physical and microbiological analyses of samples
11. Manner of data collection, organization and processing
12. Statistical analysis (tests of significance)
13. Limitations in the methods that have been discovered during the study
DESCRIPTIVE METHOD Involves a careful study, observation and detailed description of living or nonliving things and phenomena as they occur in nature. Includes studies that make comparisons and evaluations of science concepts, techniques and procedures.
What is Methodology? Types of Research Studies Formulating Research Questions and Objectives Research Designs Research Designs (cont'd) Sampling Strategies Sampling and Non-Sampling Errors Methodology Limitations Measurement Scales Designing Data Collection Tools Basic Qualitative Analysis Writing Research Proposals
Methodology
A set of procedures for the purpose of answering a research question(s) that describes:
How study participants will be selected How and when you will gather data from participants How you will analyze the data
Types of Research
Research Types:
- Exploratory
- Conclusive
  - Descriptive
  - Causal
    - Experimental
    - Observational
Exploratory Research
Conducted as the first step in determining appropriate action; helps clearly outline the information needed as part of any future research; tends to rely on qualitative research techniques such as in-depth case studies, one-on-one interviews, and focus groups.
What are some of the exploratory research studies that you have conducted?
Conclusive Research
Conclusive research tends to be quantitative research. It can further be sub-divided into two major categories: descriptive and causal.
Conclusive Research
Descriptive Research Provides data (usually quantitative) about the population being studied. It can only describe the situation, not what caused it.
Conclusive Research
Causal Research To determine whether there is a cause-and-effect relationship between variables, i.e., whether a specific independent variable is producing an effect on a dependent variable.
Causal Research
There are two types of causal research: experimental and observational (quasi-experimental).
Experimental and observational studies try to demonstrate a causal relationship between two variables.
Causal Research
Experimental Research: In experimental studies, units (people, etc.) are put into control or exposure groups by the researcher.
Causal Research
Observational Research: In an observational study, members of the control group are pre-determined. They can be matched according to demographic information to a member of the exposure group.
Causal Research
Examples of causal research: A drug trial for a new medication that has not yet been approved by the FDA. A study testing the long-term health effects of exposure to high levels of radiation. A study comparing asthma rates among children who live on farms with those of children living in urban areas.
Research Design
Primary vs. Secondary Data Quantitative vs. Qualitative Longitudinal vs. Non-Longitudinal Using Control / Comparison Groups
Research Design
Data Sources
Primary Data Observations Direct communication with subjects (surveys, interviews, etc.) Secondary Data Existing data sources collected for some other purpose than the proposed study (reports, databases, results of past studies or surveys).
What are the advantages and disadvantages of qualitative and quantitative research?
Qualitative or Quantitative?
How did you develop your strategic plan? Who participated in the strategy development process? What are the major achievements since this plan was written? What suggestions do you have for improving either the strategy itself or the strategic planning process?
Qualitative or Quantitative?
How satisfied were you with the service you received today?
Completely dissatisfied Somewhat dissatisfied Neither satisfied nor dissatisfied Somewhat satisfied Completely satisfied
Qualitative or Quantitative?
How regularly do you review progress against the plan and make adjustments as needed? Once a month 2-3 times a year Annually Every other year Never
Qualitative or Quantitative?
Qualitative or Quantitative?
Qualitative or Quantitative?
Have you or a family member ever attended Disneyland before?
Qualitative or Quantitative?
Is there anything else that you would like the management to know about the service that you received today?
Study of whether cross-border community development projects result in changes of perception among community members of the other ethnic group Clinical drug trial
Study of changes in attitudes among Yerevan residents towards racially mixed marriages
From whom you gather your data: Samples and Sampling Strategies
What is a Sample?
A sample is a finite part of a population whose properties are studied to gain information about the whole.
Sampling Strategy
The sampling strategy is the way in which you select units from the population for inclusion into your study.
Sampling Frame
A list of all the individuals (units) in the population from which the sample is taken.
Sampling Frame
Examples of Sampling Frames: List of businesses registered with the Chamber of Commerce The phone book List of clients served by a resource center List of labor migrants registered with authorities in a particular city
Sampling Frame
What could you use as a sampling frame? A study on CSR practices in Armenia A study on the health of homeless women A study of the reading habits of children between the ages of 6 and 8 in Yerevan A study of the attitudes of
Probability Samples
Sampling Strategies
- Probability
  - Simple Random
  - Stratified Random
  - Systematic Random
  - Cluster Sample
- Non-Probability
Probability Sampling
Types: Simple Random: units are randomly chosen from the sampling frame. Stratified Random: random sampling of units within categories (strata) that are assumed to exist within a population. Systematic Random: number the units within the sampling frame and select every 5th, 10th, etc. Cluster Sample: clusters (each with multiple units) within a sampling frame are randomly selected.
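These four strategies can be sketched with the standard library. The sampling frame, strata and cluster sizes below are hypothetical.

```python
# Sketches of the four probability sampling strategies, using only the
# standard library. Frame, strata and clusters are hypothetical.

import random
random.seed(42)  # reproducible illustration

frame = list(range(1, 101))  # sampling frame of 100 numbered units

# Simple random: every unit has an equal, independent chance
simple = random.sample(frame, 10)

# Systematic random: random start, then every k-th unit
k = 10
start = random.randrange(k)
systematic = frame[start::k]

# Stratified random: random sample within each stratum
strata = {"small": frame[:50], "large": frame[50:]}
stratified = {name: random.sample(units, 5) for name, units in strata.items()}

# Cluster: randomly select whole clusters, keep every unit in them
clusters = [frame[i:i + 10] for i in range(0, 100, 10)]
chosen_clusters = random.sample(clusters, 2)

print(len(simple), len(systematic), sum(len(c) for c in chosen_clusters))
```

Note how cluster sampling keeps all units inside the selected clusters, whereas stratified sampling draws randomly inside every stratum.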
Probability Sampling
Cluster Sample: If you want to conduct interviews with hotel managers in NYC about their training needs, you could decide that each hotel in the city represents one cluster, and then randomly select a small number. You could then contact the managers at these properties for interviews. If the subjects to be interviewed are selected randomly within the selected clusters, it is called "two-stage cluster sampling".
Probability Sampling
Stratified Random Sample: If you want to conduct interviews with businesses in NYC about their SI practices, you could categorize your list of businesses into small, medium and large. Within each stratum you could then randomly select a small number.
Non-Probability Samples
Sampling Strategies
- Probability
- Non-Probability
  - Convenience Sampling
  - Purposive Sampling
  - Quota Sampling
Non-Probability Sampling
Types: Convenience sampling: selection based on availability or ease of inclusion Purposive sampling: selection of individuals from whom you may be inclined to get more data Quota sampling: selection on the basis of categories that are assumed to exist within a population What are some examples of these?
Non-Probability Sampling
Types: Convenience sampling: selecting individuals who happen to be walking down the street Purposive sampling: selecting resource center clients that use many services Quota sampling: selecting businesses for a survey that fall into the categories of small,
Sample Size
Quantitative Research: A function of the variability or variance one expects to find in the population (standard deviation), and the statistical level of confidence (usually 95%) one wishes to use.
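One common way to operationalize this is the formula for estimating a proportion, n = z^2 * p * (1 - p) / e^2. The 95% z-value and 5% margin of error below are conventional defaults, not values from the source.

```python
# Sample size needed to estimate a proportion:
# n = z^2 * p * (1 - p) / e^2, where
# p = expected proportion (0.5 maximizes variance -> most conservative),
# e = desired margin of error, z = 1.96 for 95% confidence.

import math

def sample_size(p=0.5, margin=0.05, z=1.96):
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

print(sample_size())  # -> 385, the familiar "n of roughly 384-385" figure
```

Halving the margin of error roughly quadruples the required sample, which is why precision is expensive.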
Reliability
The extent to which results are consistent A study has high reliability if its results hold true across different settings and participants. Reliability is necessary, but not sufficient, for results to generalize to the larger population.
Internal Validity
Your research has high internal validity when it has successfully measured what it set out to measure.
Measurement Scales
Types: Nominal Scale Ordinal Scale Interval Scale Ratio Scale
Nominal Scales
Categorizes events, attributes or characteristics. Does not express any values or relationships between variables.
Nominal Scales
What is your sex? Male Female Unsure / Neither Labeling men as "1" and women as "2" does not mean that women are "twice something" when compared to men. Nor does it suggest that 1 is somehow better than 2.
Ordinal Scales
Categories have a logical or ordered relationship to each other. The specific amount of difference between points on the scale can not be measured. Very common in marketing, satisfaction and attitudinal research.
Ordinal Scales
Interval Scales
Rank items in order. The distances between points on the scale are equal. No absolute zero point.
Interval Scales
Example: The Fahrenheit scale is an interval scale since the distance between each degree is equal but there is no absolute zero point.
Ratio Scales
Items are ranked in order The scale consists of equidistant points and has a meaningful zero point Each category should be the same size Categories should never overlap
Ratio Scales
Examples: age, income, years of participation, etc. What is your age? 0-15 16-30 31-45 46-60 61+
How you gather data: Designing Tools for Gathering Primary Data
What tools are you familiar with for gathering primary data?
Designing Questionnaires
Question Order?
Designing Questionnaires
Orient the respondent! First, briefly describe the purpose of the research study, explain how data gathered by the survey will be used, and by whom it will be used. Is the survey confidential or anonymous?
Designing Questionnaires
First Questions The first several questions should be relevant to the study itself so that the respondent quickly understands what the survey is about and becomes engaged. The first questions should be straightforward with relatively few categories of response.
Designing Questionnaires
First Questions Do not place sensitive questions too early! They can lead the respondent to abandon the survey and result in a high nonresponse rate for the whole survey.
Designing Questionnaires
Middle Questions Respondents should be eased into sensitive topics by asking them what they think is important or what they prefer. Do not first ask respondents to agree or disagree with a position or sensitive issue.
Designing Questionnaires
Final Questions Put demographic questions at the end of the questionnaire. Since they are easy to answer, they are much better at the end when respondents are getting tired.
Designing Questionnaires
Sequence your questions logically! Questions should be grouped in sections. Each section should have a logical sequence. Avoid making the respondent jump around mentally. Help respondents shift gears by introducing a new section; for example, "Now, we would like to get your opinion on some related areas."
Pre-Testing Questionnaires
Step 1 Administer the tool to a small group of people, who know little or nothing about the research.
Pre-Testing Questionnaires
Can they clearly understand what is being asked? Does the flow of the questions make sense? Will other people have difficulty? Which questions in particular might pose problems?
Pre-Testing Questionnaires
Step 2 Test your tool with a small number of people from your sampling frame
Pre-Testing Questionnaires
Are there too many "neutral", "don't know" or "don't remember" responses? Do you need additional questions relevant to the research? Do you need to provide more space for written responses? Did respondents respond appropriately to open-ended questions? Will other people have difficulty? Which questions in particular might pose problems?
Coding
A code is a name given to the ideas or concepts that emerge from the qualitative data gathered. Open coding is the process of developing codes as you review the transcripts or notes.
How did the heart attack impact your life? I became afraid to stay at home alone. I decided to start taking care of my health. I made a decision to spend more time with my family. I am always afraid that I will have another.
Preliminary Codes: Fear and Change I became afraid to stay at home alone. I decided to start taking care of my health. I made a decision to spend more time with my family. I am always afraid that I will have another.
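The preliminary coding step above can be sketched as a keyword-matching pass over the responses. This is only an illustration of the idea: the keyword lists and the `code_response` helper are assumptions for this sketch, and real open coding is done by a human reader of the transcripts:

```python
# Illustrative mapping from preliminary codes to trigger words.
CODE_KEYWORDS = {
    "fear": ["afraid", "scared", "worry"],
    "change": ["decided", "decision", "taking care"],
}

def code_response(text):
    """Return the set of codes whose keywords appear in the response."""
    lowered = text.lower()
    return {code for code, words in CODE_KEYWORDS.items()
            if any(w in lowered for w in words)}

responses = [
    "I became afraid to stay at home alone.",
    "I decided to start taking care of my health.",
]
for r in responses:
    print(code_response(r))  # {'fear'} then {'change'}
```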
Research questions
Descriptive questions
Where do children ages 10-13 choose to read? How often do they read? How long do they read at a stretch? Where do they go to choose books? What do they look at when choosing a book? What do children's parents read? How often? Where? For how long? Do/did children's parents read with them?
Method population
Describe who or what will be studied. Describe how the subjects will be recruited or obtained. Discuss your selection criteria and methods. Explain and justify the sample size.
Method instruments
Questionnaires Field guides Participant screeners Usability test plan Pre- or post-test materials Surveys Other data collection/data recording methods Data processing methods
Read it all to get a sense of the whole. Jot down any initial categories that seem persistent. Pick one document, go through it thoroughly, and write category notes in the margins. Repeat for a few more documents. Make a list of topics. Do some mapping to group similar topics. Sort into major, minor, and leftover topics. Code your data to see if you get interesting patterns.
Rewrite category labels to be very descriptive, very brief. Try to reduce your total number of categories. Make sure category/data mapping is still accurate. Recode data if necessary. Group data for each category, analyze the groups. (Again, recode data if necessary.) Draw final conclusions.
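The "group data for each category" step above can be sketched as collecting coded excerpts under their category labels. The codes and excerpts reuse the heart-attack example from this section; `group_by_code` is a hypothetical helper, not a named method:

```python
from collections import defaultdict

# (code, excerpt) pairs produced by an earlier coding pass.
coded_excerpts = [
    ("fear", "I became afraid to stay at home alone."),
    ("change", "I decided to start taking care of my health."),
    ("fear", "I am always afraid that I will have another."),
]

def group_by_code(pairs):
    """Collect all excerpts under each category label for group-level analysis."""
    groups = defaultdict(list)
    for code, excerpt in pairs:
        groups[code].append(excerpt)
    return groups

groups = group_by_code(coded_excerpts)
print(len(groups["fear"]))  # 2
```

With the data grouped, each category can be analysed as a unit and recoded if the grouping turns out to be inaccurate.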
Research methodology
Quantitative Methods Qualitative procedures
Quantitative Methods
A definition
A survey or experiment that provides as output a quantitative or numeric description of some fraction of the population, called the sample.
The survey design The population and sample The instrumentation Variables in the study Data analysis
Sample selection
The instrumentation
Existing or new instruments
Likert scale: rating the items on a 1-to-5 scale

Rating scale

1 = strongly unfavorable to the concept
2 = somewhat unfavorable to the concept
3 = undecided
4 = somewhat favorable to the concept
5 = strongly favorable to the concept
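Once responses are recorded on the 1-to-5 scale above, item scores can be summarised, and negatively worded items reverse-coded so that a high score always means "favorable". A minimal sketch with illustrative data; `mean_score` and `reverse_code` are helpers assumed for this example, not part of any survey package:

```python
def mean_score(responses):
    """Average of the valid 1-5 ratings, ignoring missing answers (None)."""
    valid = [r for r in responses if r is not None]
    return sum(valid) / len(valid) if valid else None

def reverse_code(response, scale_min=1, scale_max=5):
    """Flip a negatively worded item: 1 becomes 5, 2 becomes 4, and so on."""
    return None if response is None else scale_max + scale_min - response

item = [4, 5, None, 3, 4]  # one questionnaire item, five respondents
print(mean_score(item))    # 4.0
print(reverse_code(2))     # 4
```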
Pilot Administration
As an undergraduate? As a postgraduate?
Data analysis
Non-response
Subjects
Selection
Group assignment
Variables
Description Validation
Pilot Content validity Prediction validity
Materials
Type
Pre-experimental
Quasi-experimental
Change groups
Test themes across the data set: where are they common? Under what circumstances are they found or not found? This sets the parameters for the interpretation and generalisation of the data. Get more than one person to analyse the data independently, then together.
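Testing where a theme is common across the data set amounts to cross-tabulating theme occurrences against the circumstances in which they appear. A sketch with illustrative data; the theme names reuse the examples given in this section, while the ward settings and counts are assumptions:

```python
from collections import Counter

# (theme, setting) pairs produced by coding the transcripts.
observations = [
    ("biographical continuity", "ward A"),
    ("biographical continuity", "ward A"),
    ("nursing routines", "ward B"),
    ("biographical continuity", "ward B"),
]

# Count how often each theme occurs in each setting.
crosstab = Counter(observations)
print(crosstab[("biographical continuity", "ward A")])  # 2
print(crosstab[("nursing routines", "ward B")])         # 1
```

Cells with zero counts are as informative as high ones: a theme absent from a setting bounds how far the interpretation can be generalised.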
Demonstrate trustworthiness in data analysis
Examples
Biographical continuity Nursing routines as a method of managing a transient workforce
Qualitative research
Interpretative research. Process oriented. The researcher is the primary data-collection instrument. Descriptive research. Findings emerge through an inductive process.
References
Creswell, J. W. (1994) Research Design: Qualitative and Quantitative Approaches. Thousand Oaks, CA; London: Sage Publications. ISBN 0803952546.
Things won are done; joy's soul lies in the doing. William Shakespeare
Workshop objectives
Research aim and research questions Identify the population and sample Decide how to collect replies Design your questionnaire Run a pilot survey Carry out main survey Analyse the data
What is a questionnaire?
A research tool for data collection. Its function is measurement (Oppenheim, 1992). The term questionnaire is used in different ways:
it often refers to self-administered and postal questionnaires (mail surveys), but some authors also use the term to describe
Targets a large number of people. Used to describe, compare or explain. Can cover activities and behaviour, knowledge, attitudes, and preferences. Specific objectives, standardised and highly structured questions. Used to collect quantitative data: information that can be counted or measured.
Strengths
Can target a large number of people. Reach respondents in widely dispersed locations. Can be relatively low cost in time and money. Relatively easy to get information from people quickly. Standardised questions. Analysis can be straightforward.
Limitations
Low response rate, with consequent bias and reduced confidence in results. Unsuitable for some people.
Question wording can have a major effect on answers. Misunderstandings cannot be corrected.
No opportunities to probe and develop answers. No control over the context and order in which questions are answered. No check on incomplete responses. Seeks information only by asking: can we trust what people say? (e.g. issues with over-reporting)
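The response rate mentioned among the limitations is simply completed questionnaires as a share of those that reached respondents. A minimal sketch with illustrative numbers (the counts and the undeliverable adjustment are assumptions for this example):

```python
def response_rate(returned, sent, undeliverable=0):
    """Completed questionnaires as a share of those that reached respondents."""
    return returned / (sent - undeliverable)

# 500 questionnaires sent, 20 undeliverable, 120 completed and returned.
print(response_rate(returned=120, sent=500, undeliverable=20))  # 0.25
```

A low figure here is what forces the follow-up question of whether the non-respondents differ systematically from those who replied.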
If you were sending out a questionnaire, what would you do to maximise the response rate?
In groups of 3 or 4, 5 minutes
Good design
Thoughtful layout, easy to follow, simple questions, appearance, length, degree of interest and importance, thank people for taking part
Incentives
Small future incentives, e.g. prize draw Understanding why their input is important
Clear specification
In groups of 3 or 4, spend 15 minutes. What research question(s) do you think the questionnaire is trying to answer? What are your reactions to:
The question wording and structure? The answer options? Which are open questions and which are closed questions? How could the questions be improved?
Abbreviations. Alternative meanings (tea, cool, dinner). Ambiguity and vague wording (fairly, generally; "you": the respondent, the household, or the family?). Double-barrelled questions: do you speak English or French? Double negatives.
Missing categories: include "other", "don't know" and "not applicable". Sensitive questions. Simple language: not technical or slang. Question ordering. Open or closed questions?