Literature Search Tools
PsycINFO: a database; a good tool to get the information you need without a mess
o Go to the advanced search page: returns articles, books, and conference proceedings
o A journal search has to be very specific, as opposed to a catalogue search in the library database
o Limit results to peer-reviewed journals
RefWorks: go to the Library Index; RefWorks is listed under R
Google Scholar: good for the big picture; you may refer to articles found here later in your research, but it is not for specifics
Chapter 2: The Beginning Stages of Research
Where do you get your ideas?
Personal interest
Observations
Reading others' research: see what they've done and generate ideas based on it
o Conferences are very useful for looking at current and ongoing research; useful for an undergrad/grad thesis
o With journals, articles, and books, the work can be a little dated, because the writer had to get the material cleared by the publisher a while ago, so the research could be old
Basic research: looks at theories of how things work, e.g. How does memory work?
o Seeks to understand or build theories
Applied research: takes what we know to solve practical problems, e.g. What sorts of memory strategies improve memory?
o Seeks to apply theories
Common Issues in Choosing Topics
No interest
Too irrelevant: sometimes research has been done again and again; when you're doing research to get published, you want to add something new to the theory/subject
Too broad or difficult
Inadequate previous literature on the topic
o This doesn't mean you shouldn't do it if there isn't much prior material; it just means the subject is new and emerging
Parts of a Journal Article
Abstract: summarizes the article; do not use the abstract as a reference, you have to actually use the article
Introduction: the literature review
Methods: how the research was conducted; should be written step by step, in enough detail that a reader could see it and do the research again
Results: the statistical analyses
Discussion: conclusions, caveats, future directions, criticisms
Roughly 1/3 of a paper for the introduction and 1/3 for the discussion
USE APA FORMAT IN PSYCHOLOGY Use quotes and footnotes very rarely in psych papers
Some Critical Thinking Points
Lit review: up to date, relevant, complete; make sure the articles relate to your hypothesis
Hypothesis: is it related to the lit review? Is it clear, with specific predictions?
Method: clearly outlined so that anyone could replicate it; do the measures make sense? Is the sample representative?
Results: appropriate analyses?
Discussion: justified conclusions, pitfalls, future directions
Chapter 3: Levels of Measurement
Once you know what you are going to research, you have to decide how you are going to measure it! Some variables are easier than others, e.g. height vs. motivation
o The definition of your variables determines how you're going to measure them
For example, think of three ways we could measure your success in this course (grades, participation, application in the real world, how much you enjoyed the course); now let's look at which category each answer falls into
Types of Measurement Scales
Measurement = the process of assigning numbers to things on a scale according to rules
o Nominal: kinds or types; categorical data (yes/no, girl/boy, red birds/blue birds)
o Ordinal: nominal + rank order (good, better; a number scale)
o Interval: ordinal + meaningful, equal increments (a thermometer); the numbers and increments are meaningful, but a score of 0 doesn't mean you have nothing (e.g. golf score, temperature)
o Ratio: interval + a true zero (money)
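The interval-vs-ratio distinction above can be made concrete with a short sketch. This is a minimal illustration, not from the notes themselves: on an interval scale like Celsius, ratios of raw values are not meaningful (they change when you change units), while a true-zero ratio scale like Kelvin preserves them.

```python
def c_to_f(celsius):
    """Convert Celsius (interval scale) to Fahrenheit (also interval)."""
    return celsius * 9 / 5 + 32

# On an interval scale, "20 degrees is twice as hot as 10 degrees" fails,
# because the ratio is not preserved under a change of units.
ratio_celsius = 20 / 10                      # 2.0
ratio_fahrenheit = c_to_f(20) / c_to_f(10)   # 68 / 50 = 1.36

# Kelvin has a true zero (ratio scale), so the ratio is physically meaningful.
ratio_kelvin = (20 + 273.15) / (10 + 273.15)  # about 1.035

print(ratio_celsius, ratio_fahrenheit, round(ratio_kelvin, 3))
```

The same logic explains why a golf score of 0 (interval-like) does not mean "no golf ability", while $0 really does mean no money.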
Precision
Sometimes you have no choice as to what kind of scale you use, but if you do, you may want to consider how precise the information/results each scale yields; it depends on your research question
E.g. what sort of scale can we use to find out whether people believe in capital punishment?
Also beware of measures that are too difficult or too easy, which produce floor and ceiling effects
o You don't get a sense of people's knowledge if you test something that's too hard, e.g. if the instructions are too hard to understand
Errors
When we measure something, how do we know we are getting the true score? We rarely do; the observed score reflects three things:
o The true score
o Systematic error (method error): results from errors in the testing situation; you can identify this and fix it in the future
o Random error (trait error): often can't be identified and can't be accounted for
Judging Measurements: Error
Error variance
o Also called random error; not predictable
o Changes in the DV not associated with the IV
o You want as little random error as possible: these are errors you can't pinpoint and don't know how to deal with
Systematic error
o Variations in scores due to predictable outside influences, e.g. test anxiety, an unmotivated person taking an IQ test
o We can measure, predict, and control these errors
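The distinction above can be simulated. The sketch below is illustrative only (the true score, the size of the anxiety effect, and the noise level are all made-up numbers): random error averages out over many observations, while systematic error shifts every score the same way and never washes out.

```python
import random

random.seed(42)

TRUE_SCORE = 100       # the score we are trying to measure (hypothetical)
SYSTEMATIC_ERROR = 5   # e.g. test anxiety shifting every score by the same amount

def observe():
    random_error = random.gauss(0, 3)  # unpredictable, averages toward zero
    return TRUE_SCORE + SYSTEMATIC_ERROR + random_error

scores = [observe() for _ in range(10_000)]
mean_observed = sum(scores) / len(scores)

# The mean settles near 105, not the true score of 100: averaging removes
# random error but leaves the systematic error intact.
print(round(mean_observed, 1))
```

This is why the notes say systematic error can be measured and controlled (you can estimate the constant shift), while random error can only be reduced by averaging over more observations.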
Variables
A variable is an aspect of a testing condition that can change or take on different characteristics under different conditions
Types of variables
o Dependent variable (DV)
o Independent variable (IV)
o Confounding variables: variables that are part of your study but that you're not interested in studying (weather, participants' IQ, motivation level); easier to pinpoint
Extraneous variables: confounding variables that work with the IV to change results; this becomes very difficult because you can't find the error easily
E.g. the relationship an athlete has with a coach, when it overlaps with the relationship with parents
Measurement Options
Self-report: issues include lying, exaggeration, etc.
Physiological: e.g. a lie detector test; people do cheat on it by controlling the body, and often the same physical response has many possible psychological causes
Observation/behavioral measurement: make sure you have a very precise coding scheme
With all methods, be careful of experimenter bias, especially with observational measurement
o Sometimes participants change their behavior because they are being observed
o Blind studies deal with this:
Single blind: the participant is not aware of the expectations; the researcher knows what's going on
Double blind: neither the participant nor the researcher knows
o E.g. an anxiety study giving anxiety pills and placebo pills to participants:
Single: the researcher knows who got which kind of pill
Double: the researcher doesn't know who got which pill
**Multiple methods can be used
Validity
Validity: are you measuring what you are supposed to be measuring?
E.g. IQ testing: a criticism is that, as we define intelligence now, an IQ test is not a valid measure of intelligence
Face validity
o An unscientific type of validity; a first check on a measure: you give the measure to experts in the field and ask them if it makes sense, without running any statistics or analyses on it
Concurrent validity
o Do the scores from one measure correlate with another established measure of the same construct? We expect them to be related
Predictive validity
o Presumably what you are measuring will be able to predict other variables
Construct validity
o Your scores should change as other related variables are introduced
o E.g. see if adding more people, making a place more crowded, and increasing the temperature results in aggression
Convergent validity
o When you measure the same construct in different ways, you get similar scores
Divergent validity
o Make sure you're not measuring two constructs with one questionnaire
Reliability
A measurement gives the same result on different occasions, except when there is a characteristic in which we expect change
Recall error? Reliability is high when error is low
Test-Retest Reliability
o The same test gives the same score on different occasions
o The relation between the same measure taken at two different times: do you get the same result/score the second time?
o How far apart do you retest? If too close, practice effects (on an IQ test, you remember the answers or give the same answers regardless)
o If too far apart, real change in the characteristic may lower the correlation
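Test-retest reliability is usually expressed as the correlation between the two administrations. The simulation below is a sketch under made-up assumptions (a stable trait with mean 100 and SD 15, measurement noise with SD 5): when the trait is stable and measurement error is modest, the correlation between time 1 and time 2 comes out high.

```python
import random

def pearson(x, y):
    """Plain Pearson correlation, no external libraries."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(0)
true_scores = [random.gauss(100, 15) for _ in range(500)]  # stable trait
time1 = [t + random.gauss(0, 5) for t in true_scores]      # first administration
time2 = [t + random.gauss(0, 5) for t in true_scores]      # retest

r = pearson(time1, time2)  # high r means high test-retest reliability
print(round(r, 2))
```

Making the noise SD larger (more random error) or letting the true scores drift between administrations (real change) would pull r down, which is exactly the "too close / too far apart" trade-off above.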
Parallel Forms Reliability
o One way researchers deal with practice effects is by using different forms of the test on different testing occasions
o If subjects end up with similar scores, we have reliability
Inter-Rater Reliability
o Consistency from rater to rater
o Coding consistency
o Look for correlations of about .75; anything above that is good
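For categorical codes, a simple stand-in for the correlation rule of thumb above is percent agreement between two raters. The behaviors and labels below are invented for illustration, assuming both raters used the same precise coding scheme.

```python
# Two raters independently code the same 8 observed behaviors.
rater_a = ["aggressive", "calm", "aggressive", "aggressive",
           "calm", "calm", "aggressive", "calm"]
rater_b = ["aggressive", "calm", "calm", "aggressive",
           "calm", "calm", "aggressive", "calm"]

matches = sum(a == b for a, b in zip(rater_a, rater_b))
agreement = matches / len(rater_a)  # 7 of 8 codes match

print(agreement)  # 0.875, above the ~.75 rule of thumb from the notes
```

Percent agreement is the simplest index; fancier ones (e.g. chance-corrected agreement) exist, but the idea is the same: consistent coding across raters.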
Split-Half Reliability: compute a score for each half of a test and calculate the consistency between the two half-scores
Can a measure be valid but not reliable? No.
o If a measure is valid, it's measuring what it's supposed to; if you're getting different results every time you measure, it can't consistently be capturing that thing
Can a measure be reliable but not valid? Yes. o You could be getting the same results over and over but not be measuring what you intend to measure
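The split-half procedure described above can be sketched with simulated data. All the numbers here are assumptions (200 subjects, 20 items, each item response tracking a latent trait plus noise): an internally consistent test yields a high correlation between the odd-item and even-item half-scores.

```python
import random

def pearson(x, y):
    """Plain Pearson correlation, no external libraries."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(1)
n_subjects, n_items = 200, 20

# Simulated test: every item response reflects the subject's latent trait
# plus item-level noise, so all items "pull in the same direction".
responses = []
for _ in range(n_subjects):
    trait = random.gauss(0, 1)
    responses.append([trait + random.gauss(0, 1) for _ in range(n_items)])

# Odd-even split: score each half of the test, then correlate the half-scores.
odd_scores = [sum(row[0::2]) for row in responses]
even_scores = [sum(row[1::2]) for row in responses]

r_halves = pearson(odd_scores, even_scores)
print(round(r_halves, 2))
```

If the items measured unrelated things, the two half-scores would barely correlate, which is exactly what split-half reliability is designed to detect.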
Practice: scales on which these variables would be measured
o Age: ratio
o Height: ratio
o Sex: nominal
o Weight: ratio
o Political preference: nominal
o College major: nominal
o Career goals: nominal
o Socioeconomic status: could be any of them, depending on how it is operationalized
o Attitude survey: nominal, ordinal, or interval