
NAME: Adaobi Chika Eze REG No: ESUT/PG/M.Sc/10/11348 PSY: 770

ORIGIN AND DEVELOPMENT OF PSYCHOLOGICAL TESTING

PSYCHOLOGICAL TESTS
A psychological test is an instrument designed to measure unobserved constructs, also known as latent variables. Psychological tests are typically, but not necessarily, a series of tasks or problems that the respondent has to solve. Psychological tests can strongly resemble questionnaires, which are also designed to measure unobserved constructs, but differ in that psychological tests ask for a respondent's maximum performance whereas a questionnaire asks for the respondent's typical performance. A useful psychological test must be both valid (i.e., there is evidence to support the specified interpretation of the test results) and reliable (i.e., internally consistent or give consistent results over time, across raters, etc.). It is important that people who are equal on the measured construct also have an equal probability of answering the test items correctly. For example, an item on a mathematics test could be "In a soccer match two players get a red card; how many players are left in the end?"; however, this item also requires knowledge of soccer to be answered correctly, not just mathematical ability. Group membership can also influence the chance of answering items correctly (differential item functioning). Often tests are constructed for a specific population, and this should be taken into account when administering tests. If a test is invariant to some group difference (e.g. gender) in one population (e.g. England), it does not automatically mean that it is also invariant in another population (e.g. Japan).

PSYCHOLOGICAL ASSESSMENT
Psychological assessment is similar to psychological testing but usually involves a more comprehensive assessment of the individual. Psychological assessment is a process that involves the integration of information from multiple sources, such as tests of normal and abnormal personality, tests of ability or intelligence, tests of interests or attitudes, as well as information from personal interviews. Collateral information is also collected about personal, occupational, or medical history, such as from records or from interviews with parents, spouses, teachers, or previous therapists or physicians. A psychological test is one of the sources of data used within the process of assessment; usually more than one test is used. Many psychologists do some level of assessment when providing services to clients or patients, and may use, for example, simple checklists to assess some traits or symptoms, but psychological assessment is a more complex, detailed, in-depth process. Typical types of focus for psychological assessment are to provide a diagnosis for treatment settings; to assess a particular area of functioning or disability, often for school settings; to help select the type of treatment or to assess treatment outcomes; to help courts decide issues such as child custody or competency to stand trial; or to help assess job applicants or employees and provide career development counseling or training.

THE IMPORTANCE OF TESTING
Tests are used in almost every nation on earth for counseling, selection, and placement. Testing occurs in settings as diverse as schools, civil service, industry, medical clinics, and counseling centers. Most persons have taken dozens of tests and thought nothing of it. Yet, by the time the typical individual reaches retirement age, it is likely that psychological test results will help shape his or her destiny. The deflection of the life course by psychological test results might be subtle, such as when a prospective mathematician qualifies for an accelerated calculus course based on tenth-grade achievement scores. More commonly, psychological test results alter individual destiny in profound ways. Whether a person is admitted to one college and not another, offered one job but refused a second, or diagnosed as depressed or not: all such determinations rest, at least in part, on the meaning of test results as interpreted by persons in authority. Put simply, psychological test results change lives. For this reason it is prudent, indeed almost mandatory, that students of psychology learn about the contemporary uses and occasional abuses of testing. In Case Exhibit 1.1, the life-altering aftermath of psychological testing is illustrated by means of several true case history examples. The importance of testing is also evident from historical review. Students of psychology generally regard historical issues as dull, dry, and pedantic, and sometimes these prejudices are well deserved. After all, many textbooks fail to explain the relevance of historical matters and provide only vague sketches of early developments in mental testing. As a result, students of psychology often conclude incorrectly that historical issues are boring and irrelevant. In reality, the history of psychological testing is a captivating story that has substantial relevance to present-day practices, and historical developments are pertinent to contemporary testing for several reasons.

THE CONSEQUENCES OF TEST RESULTS
The importance of psychological testing is best illustrated by example. Consider these brief vignettes:

A shy, withdrawn 7-year-old girl is administered an IQ test by a school psychologist. Her score is phenomenally higher than the teacher expected. The student is admitted to a gifted and talented program where she blossoms into a self-confident and gregarious scholar.

Three children in a family living near a lead smelter are exposed to the toxic effects of lead dust and suffer neurological damage. Based in part on psychological test results that demonstrate impaired intelligence and shortened attention span in the children, the family receives an $8 million settlement from the company that owns the smelter.

A candidate for a position as a police officer is administered a personality inventory as part of the selection process. The test indicates that the candidate tends to act before thinking and resists supervision from authority figures. Even though he has excellent training and impresses the interviewers, the candidate does not receive a job offer.

A student, unsure of what career to pursue, takes a vocational interest inventory. The test indicates that she would like the work of a pharmacist. She signs up for a prepharmacy curriculum but finds the classes to be both difficult and boring. After three years, she abandons pharmacy for a major in dance, frustrated that she still faces three more years of college to earn a degree.

PSYCHIATRIC ANTECEDENTS OF PSYCHOLOGICAL TESTING
Most historians trace the beginnings of psychological testing to the experimental investigation of individual differences that flourished in Germany and Great Britain in the late 1800s. There is no doubt that early experimentalists such as Wilhelm Wundt, Francis Galton, and James McKeen Cattell laid the foundations for modern-day testing, and we will review their contributions in detail. But psychological testing owes as much to early psychiatry as it does to the laboratories of experimental psychology. In fact, the examination of the mentally ill around the middle of the nineteenth century resulted in the development of numerous early tests

(Bondy, 1974). These early tests generally lacked standardization and were consequently relegated to oblivion. They were nonetheless influential in determining the course of psychological testing, so it is important to mention a few typical developments from this era. In 1885, the German physician Hubert von Grashey developed the antecedent of the memory drum as a means of testing brain-injured patients. His subjects were shown words, symbols, or pictures through a slot in a sheet of paper that was moving slowly over the stimuli. Grashey found that many patients could recognize stimuli shown in their totality but could not identify them when shown only piecemeal through the moving slot.

THE BRASS INSTRUMENTS ERA OF TESTING
Experimental psychology flourished in the late 1800s in

continental Europe and Great Britain. For the first time in history, psychologists departed from the wholly subjective and introspective methods that had been so fruitlessly pursued in the preceding centuries. Human abilities were instead tested in laboratories. Researchers used objective procedures that were capable of replication. Gone were the days when rival laboratories would have raging arguments about imageless thought, one group saying it existed, another group saying that such a mental event was impossible. Even though the new emphasis on objective methods and measurable quantities was a vast improvement

over the largely sterile mentalism that preceded it, the new experimental psychology was itself a dead end, at least as far as psychological testing was concerned. The problem was that the early experimental psychologists mistook simple sensory processes for intelligence. They used assorted brass instruments to measure sensory thresholds and reaction times, thinking that such abilities were at the heart of intelligence. Hence, this period is sometimes referred to as the Brass Instruments era of psychological testing. In spite of the false start made by early experimentalists, at least they provided psychology with an appropriate methodology. Such pioneers as Wundt, Galton, Cattell, and Clark Wissler showed that it was possible to expose the mind to scientific scrutiny and measurement. This was a fateful change in the axiomatic assumptions of psychology, a change that has stayed with us to the current day. Most sources credit Wilhelm Wundt (1832-1920) with founding the first psychological laboratory in 1879 in Leipzig, Germany. It is less well recognized that he was measuring mental processes years before, at least as early as 1862, when he experimented with his "thought meter" (Diamond, 1980). This device was a calibrated pendulum with needles sticking off from each side. The pendulum would swing back and forth, striking bells with the needles. The observer's task was to take note of the position of the pendulum when the bells sounded. Wundt could adjust the needles beforehand and thereby know the precise position of the pendulum when each bell was struck. Wundt

thought that the difference between the observed pendulum position and the actual position would provide a means of determining the swiftness of thought of the observer. Wundt's analysis was relevant to a longstanding problem in astronomy. The problem was that two or more astronomers simultaneously using the same telescope (with multiple eyepieces) would report different crossing times as the stars moved across a grid line on the telescope. Even in Wundt's time, it was a well-known event in the history of science that Kinnebrook, an assistant at the Royal Observatory in England, had been dismissed in 1796 because his stellar crossing times were nearly a full second too slow (Boring, 1950). Wundt's analysis offered another explanation that did not assume incompetence on the part of anyone. Put simply, Wundt believed that the speed of thought might differ from one person to the next: "For each person there must be a certain speed of thinking, which he can never exceed with his given mental constitution. But just as one steam engine can go faster than another, so this speed of thought will probably not be the same in all persons." (Wundt, 1862, as translated in Rieber, 1980)

DIFFERENTIATE BETWEEN PSYCHOLOGICAL TESTING AND PSYCHOLOGICAL ASSESSMENT
Psychological Testing is "An objective and standardized measure of a sample of behavior" while Psychological Assessment is "An extremely complex process of solving problems (answering questions) in which psychological tests are often used as one of the methods of collecting relevant data or the attempt of a skilled professional, usually a psychologist, to use the techniques and tools of psychology to learn either general or specific facts about

another person, either to inform others of how they function now, or to predict their behavior and functioning in the future."

There are basically seven types of tests:
1. Group educational tests such as the California Achievement Test
2. Ability and preference tests such as the Myers-Briggs
3. LD and neuropsychology tests such as the Halstead-Reitan Battery
4. Individual intelligence tests such as the WAIS and WISC
5. Readiness tests such as the Metropolitan Readiness Tests
6. Objective personality tests such as the MMPI-2 or PAI
7. Self-administered, scored, and interpreted tests, such as database user qualification tests

There are generally three parties involved in testing according to the Standards for Educational and Psychological Testing, though this could become four:

Test Developer - This may be a company, an individual, a school.... The Test Developer has certain responsibilities in developing, marketing, and distributing tests and in educating test users.

Test User - This may be a counselor, a clinician, a personnel official.... The Test User has certain responsibilities in selecting, using, scoring, interpreting, and utilizing tests.

Test Taker - This may be the client in many cases. The Test Taker has certain rights regarding tests, their use, and the information gained from them.

Test Utilizer - This may be the test taker, but in other cases a business or organization may send a person to be tested. Thus, the organization also has certain rights regarding tests, their use, and the information gained from them.

The Test Developer should:
1. Construct a manual containing all relevant information, such as the development and purpose of the test; information on standardized administration and scoring; data on the collection and composition of the standardization sample; information on the test reliability and validity; adequate information for the educated consumer to determine the appropriate and inappropriate uses of the test; references to relevant published research regarding the test and its use; and information on correct interpretation and application and possible sources of misuse, as well as any bias in test construction or use.
2. Support the information provided with data.
3. Adhere to all ethical guidelines regarding advertising, distributing, and marketing testing material.

The Test User should:
1. Be aware of the limits of tests with regard to reliability, validity, standard error of measurement, and confidence intervals, as well as the appropriate interpretation and use of the instrument. If you have any questions about tests, consult the Mental Measurements Yearbook, Tests in Print, or the 1984 Joint Technical Standards for Educational and Psychological Testing.
2. Read the manual and understand all relevant information.
3. Be responsible for assessing your own competence regarding use of a test, or the competence of those you employ for that purpose; for adhering to the appropriate use of the test as stated in the manual; and for being aware of any test bias or client characteristics that might decrease the validity of the test results or interpretation, and reporting these with the testing report of selection, data, interpretation, and application.
4. Protect test security where such security is vital to test reliability and validity.
5. Be aware of the dangers of automated testing services and realize that they are to be used only by professionals.
6. Inform the client to be tested as to the purpose and potential use and applicability of the testing materials and results, as well as who will potentially have access to the results. The test user has the responsibility to see that the results are made available and used only for and by those specified in the consent agreement. Obsolete information should be regularly purged from records.

Good test use
Good test use requires:
1. Comprehensive assessment using history and test scores
2. Acceptance of the responsibility for proper test use
3. Consideration of the Standard Error of Measurement and other psychometric knowledge

4. Maintaining integrity of test results (such as the correct use of cut-off scores)
5. Accurate scoring
6. Appropriate use of norms
7. Willingness to provide interpretive feedback and guidance to test takers

A good test is both reliable and valid, and has good norms. Reliability, briefly, refers to the consistency of test results. For example, IQ is not presumed to vary much from week to week, so scores from an IQ test should be highly reliable. On the other hand, transient mood states do not last long, and a measurement of such moods should not be expected to be reliable over long periods of time. A measure of transient mood state may still be shown to be reliable if it correlates well with other tests or behavior observations indicative of transient mood states. Validity, briefly, refers to how well a test measures what it says it measures. Put simply, validity tells you whether the hammer is the right tool to fix a chair, and reliability tells you how good a hammer you have. A test of intelligence based on eye color (blue-eyed people are more intelligent than brown-eyed people) would certainly be reliable, because eye color does not change, but it would not be valid, because IQ and eye color have little to do with each other. Norms are designed to tell you what the result of a measurement (a number) means in relation to other results (numbers). The "normative sample" should be very representative of the population of people who will be given the test. Thus, if a test is to be used on the general population, the normative sample should be large, include people from ethnically and culturally diverse backgrounds, and include people from all levels of income and educational status.
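To make these ideas concrete, here is a minimal sketch in Python using invented scores: it estimates test-retest reliability as a Pearson correlation, derives the standard error of measurement from it, and expresses one examinee's score as a z-score and percentile against the normative sample. All numbers and variable names are hypothetical, not data from any real test.

```python
# Illustrative sketch only: hypothetical scores and invented names.
import statistics as stats          # statistics.correlation needs Python 3.10+
from math import sqrt, erf

# Hypothetical test-retest data for ten examinees (same test, two occasions).
time1 = [12, 15, 9, 20, 14, 17, 11, 18, 13, 16]
time2 = [13, 14, 10, 19, 15, 18, 10, 17, 12, 17]

# Test-retest reliability estimated as the Pearson correlation between occasions.
r_xx = stats.correlation(time1, time2)

# Standard error of measurement: SEM = SD * sqrt(1 - reliability).
sd = stats.stdev(time1)
sem = sd * sqrt(1 - r_xx)

# A 95% confidence band around an observed score of 14.
observed = 14
ci = (observed - 1.96 * sem, observed + 1.96 * sem)

# Norm-referenced interpretation: z-score and percentile against the normative sample.
norm_mean = stats.mean(time1)
z = (observed - norm_mean) / sd
percentile = 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF

print(f"reliability={r_xx:.2f} SEM={sem:.2f} CI={ci} z={z:.2f} percentile={percentile:.0%}")
```

The same score of 14 would mean something quite different against a different normative sample, which is why the representativeness of the norms matters as much as the reliability of the instrument.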

Psychological tests fall into several categories:

Achievement and aptitude tests are usually seen in educational or employment settings, and they attempt to measure either how much you know about a certain topic (i.e., your achieved knowledge), such as mathematics or spelling, or how much of a capacity you have (i.e., your aptitude) to master material in a particular area, such as mechanical relationships.

Intelligence tests attempt to measure your intelligence, that is, your basic ability to understand the world around you, assimilate its functioning, and apply this knowledge to enhance the quality of your life. Or, as Alfred Whitehead said about intelligence, it enables the individual to profit by error without being slaughtered by it.[1] Intelligence, therefore, is a measure of a potential, not a measure of what you've learned (as in an achievement test), and so it is supposed to be independent of culture. The challenge is to design a test that can actually be culture-free; most intelligence tests fail in this area to some extent for one reason or another.

Neuropsychological tests attempt to measure deficits in cognitive functioning (i.e., your ability to think, speak, reason, etc.) that may result from some sort of brain damage, such as a stroke or a brain injury.

Occupational tests attempt to match your interests with the interests of persons in known careers. The logic here is that if the things that interest you in life match up with, say, the things that interest most school teachers, then you might make a good school teacher yourself.

Personality tests attempt to measure your basic personality style and are most used in research or forensic settings to help with clinical diagnoses. Two of the most well-known personality tests are the Minnesota Multiphasic Personality Inventory (MMPI), or the revised MMPI-2, composed of several hundred yes-or-no questions, and the Rorschach (the inkblot test), composed of several cards of inkblots; you simply give a description of the images and feelings you experience in looking at the blots.

Specific clinical tests attempt to measure specific clinical matters, such as your current level of anxiety or depression.

Psychological assessment:
1. Frequently uses tests
2. Typically does not involve defined procedures or steps
3. Contributes to some decision process for some problem, often by redefining the problem, breaking the problem down into smaller pieces, or highlighting some part(s) of the problem
4. Requires the examiner to consider, evaluate, and integrate the data
5. Produces results that cannot be evaluated solely on psychometric grounds
6. Is less routine and inflexible, and more individualized

The point of assessment is often diagnosis or classification. These are the act of placing a person in a strictly or loosely defined category of people. This allows us to quickly understand what they are like in general, and to assess the presence of other relevant characteristics based upon people similar to them. There are several parts to assessment.

The Interview
Note that an interview can be conducted in many ways and for a variety of purposes. Below are several aspects in which to view an interview.

Verbal and face-to-face - What does the client tell you? How much information are they willing/able to provide?

Para-verbal - How does the client speak? At normal pace, tone, volume, inflection? What is their command of English, and how well do they choose their words? Do they pick up on non-verbal cues for speech and turn-taking? How organized is their speech?

Situation - Is the client cooperative? Is their participation voluntary? For what purpose is the interview conducted? Where is the interview conducted?

There are really two kinds of interviews: structured and unstructured.

Structured - The SCID-R is the Structured Clinical Interview for the DSM-III-R and is, as the name implies, an example of a very structured interview. It is designed to provide a diagnosis for a client by detailed questioning of the client in a "yes/no" or "definitely/somewhat/not at all" forced-choice format. It is broken up into different sections reflecting the diagnosis in question. Structured interviews often use closed questions, which require a simple pre-determined answer. Examples of closed questions are "When did this problem begin?" and "Was there any particular stressor going on at that time?" Closed questions are better suited for specific information gathering.

Unstructured - Other interviews can be less structured and allow the client more control over the topic and direction of the interview. Unstructured interviews are better suited for general information gathering, and structured interviews for specific information gathering. Unstructured interviews often use open questions, which ask for more explanation and elaboration on the part of the client. Examples of open questions are "What was happening in your life when this problem started?", "How did you feel then?", "How did this all start?", and "Can you tell me about how this problem started?" Open questions are better suited for general information gathering.

Interviews can be used for clinical purposes (such as the SCID-R) or for research purposes (such as to determine moral development or ego state).

Behavioral Observations
How does the person act? Nervous, calm, smug? What do they do and not do? Do they make and maintain eye contact? How close to you do they sit? Often, behavioral observations are some of the most important information you can gather. Behavioral observations may be used clinically (such as to add to interview information or to assess the results of treatment) or in research settings (to see which treatment is more efficient, or as a DV, i.e. a dependent variable).

IDENTIFY THE VARIABLES CAPABLE OF CONFOUNDING THE OUTCOME OF ASSESSMENT. HOW DO YOU CONTROL THESE VARIABLES?

Confounding Variables
Confounding variables are variables that the researcher failed to control, or eliminate, damaging the internal validity of an experiment.

Confounding, interactions, and methods for assessment of effect modification; strategies to allow/adjust for confounding in design and analysis
While the results of an epidemiological study may reflect the true effect of an exposure on the development of the outcome under investigation, it should always be considered that the findings may in fact be due to an alternative explanation.1 Such alternative explanations may be due to the effects of chance (random error), bias or confounding, which may produce spurious results, leading us to conclude the existence of a valid statistical

association when one does not exist or, alternatively, the absence of an association when one is truly present.1 Observational studies are particularly susceptible to the effects of chance, bias and confounding, and all three need to be considered at both the design and analysis stage of an epidemiological study so that their potential effects can be minimised.

Confounding, interaction and effect modification
Confounding provides an alternative explanation for an association between an exposure (X) and an outcome. It occurs when an observed association is in fact distorted because the exposure is also correlated with another risk factor (Y). This risk factor Y is also associated with the outcome, but independently of the exposure under investigation, X. As a consequence, the estimated association is not the same as the true effect of exposure X on the outcome. An unequal distribution of the additional risk factor, Y, between the study groups will result in confounding. The observed association may be due totally or in part to the effects of differences between the study groups other than the exposure under investigation.1 A potential confounder is any factor that might have an effect on the risk of the disease under study. This may include factors with a direct causal link to the disease, as well as factors that are proxy measures for other unknown causes, such as age and socioeconomic status.2

In order for a variable to be considered as a confounder: 1. The variable must be independently associated with the outcome (i.e. be a risk factor).

2. The variable must also be associated with the exposure under study in the source population.
3. It should not lie on the causal pathway between exposure and disease.

Examples of confounding
A study found alcohol consumption to be associated with the risk of coronary heart disease (CHD). However, smoking may have confounded the association between alcohol and CHD. Smoking is a risk factor for CHD in its own right, so it is independently associated with the outcome, and smoking is also associated with alcohol consumption because smokers tend to drink more than non-smokers. Controlling for the potential confounding effect of smoking may in fact show no association between alcohol consumption and CHD.
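A minimal numerical sketch of this example follows; the counts are fabricated purely to show how a crude risk ratio can suggest an association that disappears once the data are stratified by the confounder (smoking).

```python
# Fabricated counts, for illustration only: they are constructed so that drinking
# looks harmful overall but has no effect once smoking is taken into account.

def risk_ratio(cases_exp, n_exp, cases_unexp, n_unexp):
    """Risk ratio = risk among the exposed / risk among the unexposed."""
    return (cases_exp / n_exp) / (cases_unexp / n_unexp)

# Crude analysis: alcohol vs CHD, ignoring smoking.
crude_rr = risk_ratio(136, 1000, 64, 1000)

# Stratified analysis: the same people, split by the potential confounder (smoking).
rr_smokers = risk_ratio(128, 800, 32, 200)      # drinkers vs non-drinkers, smokers only
rr_nonsmokers = risk_ratio(8, 200, 32, 800)     # drinkers vs non-drinkers, non-smokers only

print(f"crude RR = {crude_rr:.2f}")              # about 2.1: apparent association
print(f"RR in smokers = {rr_smokers:.2f}")       # 1.0: no association within stratum
print(f"RR in non-smokers = {rr_nonsmokers:.2f}")# 1.0: no association within stratum
```

Here the crude risk ratio of about 2 vanishes within each smoking stratum, which is exactly the pattern the example above describes.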

Effects of confounding
Confounding factors, if not controlled for, cause bias in the estimate of the impact of the exposure being studied. The effects of confounding may result in:
1. An observed difference between study populations when no real difference exists.
2. No observed difference between study populations when a true association does exist.
3. An underestimate of an effect.
4. An overestimate of an effect.

Controlling for confounding

Confounding can be dealt with either at the study design stage or at the analysis stage, provided sufficient relevant data have been collected. A number of methods can be applied to control for potential confounding factors, and the aim of all of them is to make the groups as similar as possible with respect to the confounder. Potential confounding factors may be identified at the design stage based on previous studies or because the factor is considered biologically plausible.

Controlling for confounding at the design stage

Randomisation (random allocation)
This is the ideal method of controlling for confounding because all potential confounding variables, both known and unknown, should be equally distributed in the study groups. It involves the random allocation (e.g. using a table of random numbers) of individuals to study groups. However, this method can only be used in experimental clinical trials.
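A minimal sketch of random allocation, with an invented participant list, is shown below; in a real trial the allocation sequence would be generated and concealed according to the study protocol.

```python
# Illustration only: randomly allocate hypothetical participants to two arms and
# check that a known covariate (smoking) ends up roughly balanced by chance.
# Unknown confounders are balanced in expectation in the same way.
import random

random.seed(42)  # fixed seed so the sketch is reproducible

participants = [{"id": i, "smoker": random.random() < 0.3} for i in range(200)]
random.shuffle(participants)
treatment, control = participants[:100], participants[100:]

for name, arm in (("treatment", treatment), ("control", control)):
    prop = sum(p["smoker"] for p in arm) / len(arm)
    print(f"{name}: {prop:.0%} smokers")  # proportions should be similar across arms
```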

Restriction
Restriction limits participation in the study to individuals who are similar in relation to the confounder. For example, if participation in a study is restricted to non-smokers only, any potential confounding effect of smoking will be eliminated. However, a disadvantage of restriction is that it may be difficult to generalize the results of the study to the wider population if the study group is homogeneous.1

Matching
Matching involves selecting controls so that the distribution of potential confounders (e.g. age or smoking) is as similar as possible to that amongst the cases. In practice this is only utilised in case-control studies, but it can be done in two ways:
1. Pair matching - selecting for each case one or more controls with similar characteristics (e.g. same age and smoking habits)
2. Frequency matching - ensuring that as a group the cases have similar characteristics to the controls

Detecting the presence of confounding
The presence or magnitude of confounding in epidemiological studies is evaluated by observing the degree of discrepancy between the crude and adjusted estimates.1 One method to assess for the presence of confounding is to calculate the crude relative risk (without controlling for confounding) and compare this measure with the relative risk adjusted for the potential confounder. If the relative risk has changed and there is little variation between the stratum-specific rate ratios, then there is evidence of confounding. It is inappropriate to use statistical tests to assess the presence of confounding, but the following methods may be used to minimise its effect.

Controlling for confounding during analysis

Stratification
Stratification allows the association between exposure and outcome to be examined within different strata of the confounding variable, for example by age or sex. The strength of the association is initially measured separately within each stratum of the confounding variable. Assuming the stratum-specific rates are relatively uniform, they may then be pooled to give a summary estimate of the relative risk adjusted or controlled for the potential confounder. One drawback of this method is that the more the original sample is stratified, the smaller each stratum will become, and the power to detect associations is reduced. Standardisation is an example of stratification.
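The pooling step can be made concrete with a small sketch of the Mantel-Haenszel summary risk ratio, computed here from invented stratum counts; a real analysis would normally rely on an epidemiology or statistics package rather than hand-written code.

```python
# Illustration only: Mantel-Haenszel summary risk ratio across strata of a confounder.
# Each stratum is (exposed cases, exposed total, unexposed cases, unexposed total).
strata = [
    (128, 800, 32, 200),   # smokers (hypothetical counts)
    (8, 200, 32, 800),     # non-smokers (hypothetical counts)
]

num = 0.0
den = 0.0
for a, n1, b, n0 in strata:
    t = n1 + n0           # total subjects in the stratum
    num += a * n0 / t     # weighted contribution of the exposed group
    den += b * n1 / t     # weighted contribution of the unexposed group

rr_mh = num / den
print(f"Mantel-Haenszel adjusted RR = {rr_mh:.2f}")  # about 1.0 here, vs a crude RR of about 2.1
```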

Multivariate analysis
As the number of confounders that can be controlled for simultaneously by stratification is limited, particularly as this may lead to small numbers in some strata, statistical modelling (e.g. logistic regression) is commonly used to control for more than one confounder at the same time (see the sketch below).

Residual confounding
It is only possible to control for confounders in the analysis if data on the confounders were accurately collected. Residual confounding occurs when a confounder has not been adequately adjusted for in the analysis. An example would be socioeconomic status, because it influences multiple health outcomes but is difficult to measure accurately.3 Random misclassification of a confounder can result in either an over- or under-estimate of the true effect of the exposure under investigation.

Interaction (effect modification)
Interaction occurs when the direction or magnitude of an association between two variables differs due to the effect of a third variable. It may reflect a cumulative effect of multiple risk factors which are not acting independently and produce a greater or lesser effect than the sum of the effects of each factor acting on its own.

In summary, confounding variables can be controlled by:
1. Randomization
2. Matching
3. Adjustment using stratified methods: direct standardisation, indirect standardisation, and Mantel-Haenszel
4. Multiple regression: linear, logistic, Poisson, and Cox
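As a sketch of the regression approach, the snippet below simulates data in which smoking confounds an alcohol-CHD association and fits logistic models with and without the confounder; the data are simulated and the use of the statsmodels library is simply one common choice, not the only way to do this.

```python
# Illustration only: adjusting for a confounder with logistic regression (simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
smoker = rng.binomial(1, 0.3, n)                      # confounder
drinker = rng.binomial(1, 0.2 + 0.5 * smoker, n)      # exposure, correlated with smoking
p_chd = 1 / (1 + np.exp(-(-3.0 + 1.2 * smoker)))      # outcome risk depends on smoking only
chd = rng.binomial(1, p_chd)

# Crude model: exposure only (picks up the confounded association).
crude = sm.Logit(chd, sm.add_constant(drinker)).fit(disp=False)

# Adjusted model: exposure plus confounder in the same model.
X = sm.add_constant(np.column_stack([drinker, smoker]))
adjusted = sm.Logit(chd, X).fit(disp=False)

print("crude OR for drinking:   ", np.exp(crude.params[1]).round(2))     # noticeably above 1
print("adjusted OR for drinking:", np.exp(adjusted.params[1]).round(2))  # near 1.0
```

The exposure coefficient in the adjusted model is interpreted as the effect of drinking holding smoking constant, which is what "controlling for" a confounder means in this framework.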

DISCUSS IN DETAIL THEORIES OF INTELLIGENCE. WHAT ARE THE DIFFICULTIES ENCOUNTERED IN INTELLIGENCE TESTING?

Intelligence. A creative person is usually very intelligent in the ordinary sense of the term and can meet the problems of life as rationally as anyone can, but often he refuses to let intellect rule; he relies strongly on intuition, and he respects the irrational in himself and others. Above a certain level, intelligence seems to have little correlation with creativity--i.e., a highly intelligent person may not be as highly creative. A distinction is sometimes made between convergent thinking, the analytic reasoning measured by intelligence tests, and divergent thinking, a richness of ideas and originality of thinking. Both seem necessary to creative performance, although in different degrees according to the task or occupation (a mathematician may exhibit more convergent than divergent thinking and an artist the reverse). Theories of intelligence Theories of intelligence, as is the case with most scientific theories, have evolved through a succession of paradigms that have been put forward to clarify our understanding of the idea. The major paradigms have been those of psychological measurement (often called psychometrics); cognitive psychology, which concerns itself with the mental processes by which the mind functions; the merger of cognitive psychology with contextualism (the interaction of the environment and processes of the mind); and biologic science, which considers the neural bases of intelligence.

Psychometric theories Psychometric theories have generally sought to understand the structure of intelligence: What form does it take, and what are its parts, if any? Such theories have generally been based on and tested by the use of data obtained from paper-and-pencil tests of mental abilities that include analogies (e.g., lawyer : client :: doctor : ?), classifications (e.g., Which word does not belong with the others? robin, sparrow, chicken, bluejay), and series completions (e.g., What number comes next in the following series? 3, 6, 10, 15, 21, ?). Underlying the psychometric theories is a psychological model according to which intelligence is a composite of abilities measured by mental tests. This model is often quantified by assuming that each test score is a weighted linear composite of scores on the underlying abilities. For example, performance on a number-series test might be a weighted composite of number, reasoning, and possibly memory abilities for a complex series. Because the mathematical model is additive, it assumes that less of one ability can be compensated for by more of another ability in test performance. For instance, two people could gain equivalent scores on a number-series test if a deficiency in number ability in the one person relative to the other was compensated for by superiority in reasoning ability. The first of the major psychometric theories was that of the British psychologist Charles E. Spearman, who published his first major article on intelligence in 1904. Spearman noticed what, at the turn of the century, seemed like a peculiar fact: People who did well on one mental ability test tended to do well on the others, and people

who did not do well on one of them also tended not to do well on the others. Spearman devised a technique for statistical analysis, which he called factor analysis, that examines patterns of individual differences in test scores and is said to provide an analysis of the underlying sources of these individual differences. Spearman's factor analyses of test data suggested to him that just two kinds of factors underlie all individual differences in test scores. The first and more important factor Spearman labeled the "general factor," or g, which is said to pervade performance on all tasks requiring intelligence. In other words, regardless of the task, if it requires intelligence, it requires g. The second factor is specifically related to each particular test. But what, exactly, is g? After all, calling something a general factor is not the same as understanding what it is. Spearman did not know exactly what the general factor might be, but he proposed in 1927 that it might be something he labeled "mental energy." The American psychologist L.L. Thurstone disagreed not only with Spearman's theory but also with his isolation of a single factor of general intelligence. Thurstone argued that the appearance of just a single factor was an artifact of the way Spearman did his factor analysis and that if the analysis were done in a different and more appropriate way, seven factors would appear, which Thurstone referred to as the "primary mental abilities." The seven primary mental abilities identified by Thurstone were verbal comprehension (as involved in the knowledge of vocabulary and in reading); verbal fluency (as involved in writing and in producing words); number (as involved in solving fairly simple numerical computation and arithmetical reasoning problems); spatial visualization (as involved in mentally visualizing and manipulating objects, as is required to

fit a set of suitcases into an automobile trunk); inductive reasoning (as involved in completing a number series or in predicting the future based upon past experience); memory (as involved in remembering people's names or faces); and perceptual speed (as involved in rapidly proofreading to discover typographical errors in a typed text). It is a possibility, of course, that Spearman was right and Thurstone was wrong, or vice versa. Other psychologists, however, such as the Canadian Philip E. Vernon and the American Raymond B. Cattell, suggested another possibility--that both were right in some sense. In the view of Vernon and Cattell, abilities are hierarchical. At the top of the hierarchy is g, or general ability. But below g in the hierarchy are successive levels of gradually narrowing abilities, ending with Spearman's specific abilities. Cattell, for example, suggested in a 1971 work that general ability can be subdivided into two further kinds of abilities, fluid and crystallized. Fluid abilities are the reasoning and problem-solving abilities measured by tests such as the analogies, classifications, and series completions described above. Crystallized abilities can be said to derive from fluid abilities and be viewed as their products, which would include vocabulary, general information, and knowledge about specific fields. John L. Horn, an American psychologist, suggested that crystallized ability more or less increases over the life span, whereas fluid ability increases in the earlier years and decreases in the later ones. Most psychologists agreed that a broader subdivision of abilities was needed than was provided by Spearman, but not all of these agreed that the subdivision should be hierarchical. J.P. Guilford, an

American psychologist, proposed a structure-of-intellect theory, which in its earlier versions postulated 120 abilities. For example, in an influential 1967 work Guilford argued that abilities can be divided into five kinds of operations, four kinds of contents, and six kinds of products. These various facets of intelligence combine multiplicatively, for a total of 5 × 4 × 6, or 120 separate abilities. An example of such an ability would be cognition (operation) of semantic (content) relations (product), which would be involved in recognizing the relation between lawyer and client in the analogy problem, lawyer : client :: doctor : ?. In 1984 Guilford increased the number of abilities proposed by his theory, raising the total to 150. It had become apparent that there were serious problems with psychometric theories, not just individually but as a basic approach to the question. For one thing, the number of abilities seemed to be getting out of hand. A movement that had started by postulating one important ability had come, in one of its major manifestations, to postulating 150. Because parsimony is usually regarded as one of several desirable features of a scientific theory, this number caused some disturbance. For another thing, the psychometricians, as practitioners of factor analysis were called, didn't seem to have any strong scientific means of resolving their differences. Any method that could support so many theories seemed somewhat suspect, at least in the use to which it was being put. Most significant, however, was the seeming inability of psychometric theories to say anything substantial about the processes underlying intelligence. It is one thing to discuss "general ability" or "fluid ability," but quite another to describe just what is happening in people's minds when they are exercising the ability in question. The cognitive psychologists proposed a solution to these problems,

which was to study directly the mental processes underlying intelligence and, perhaps, relate them to the factors of intelligence proposed by the psychometricians. Cognitive theories During the era of psychometric theories, the study of intelligence was dominated by those investigating individual differences in people's test scores. In an address to the American Psychological Association in 1957, the American psychologist Lee Cronbach, a leader in the testing field, decried the fact that some psychologists study individual differences and others study commonalities in human behaviour but never do the two meet. In Cronbach's address his plea to unite the "two disciplines of scientific psychology" led, in part, to the development of cognitive theories of intelligence and of the underlying processes posited by these theories. Without an understanding of the processes underlying intelligence it is possible to come to misleading, if not wrong, conclusions when evaluating overall test scores or other assessments of performance. Suppose, for example, that a student does poorly on the type of verbal analogies questions commonly found on psychometric tests. A possible conclusion is that the student does not reason well. An equally plausible interpretation, however, is that the student does not understand the words or is unable to read them in the first place. A student seeing the analogy, audacious : pusillanimous :: mitigate : ?, might be unable to solve it because of a lack of reasoning ability, but a more likely possibility is that the student does not know the meanings of the words. A cognitive analysis enables the interpreter of the test score to determine both the degree to which the poor score is due to low reasoning ability and

the degree to which it is a result of not understanding the words. It is important to distinguish between the two interpretations of the low score, because they have different implications for understanding the intelligence of the student. A student might be an excellent reasoner but have only a modest vocabulary, or vice versa. Underlying most cognitive approaches to intelligence is the assumption that intelligence comprises a set of mental representations (e.g., propositions, images) of information and a set of processes that can operate on the mental representations. A more intelligent person is assumed to represent information better and, in general, to operate more quickly on these representations than does a less intelligent person. Researchers have sought to measure the speed of various types of thinking. Through mathematical modeling, they divide the overall time required to perform a task into the constituent times needed to execute each mental process. Usually, they assume that these processes are executed serially--one after another--and, hence, that the processing times are additive. But some investigators allow for partially or even completely parallel processing, in which case more than one process is assumed to be executed at the same time. Regardless of the type of model used, the fundamental unit of analysis is the same: a mental process acting upon a mental representation. A number of cognitive theories of intelligence have evolved. Among them is that of the American psychologists Earl B. Hunt, Nancy Frost, and Clifford E. Lunneborg, who in 1973 showed one way in which psychometrics and cognitive modeling could be combined.
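Before turning to that work, the additive, serial decomposition described above can be made concrete with a small sketch: if each experimental condition engages a known mix of hypothetical component processes, the time per process can be recovered from the condition mean reaction times by least squares. The conditions, component names, and timings here are all invented for illustration, not taken from any published study.

```python
# Illustration only: recovering component-process times from condition mean RTs
# under the additive, serial-processing assumption (total RT = sum of component times).
import numpy as np

# Rows = conditions; columns = how many times each hypothetical component
# (encode, compare, respond) is assumed to be executed in that condition.
design = np.array([
    [1, 1, 1],
    [1, 2, 1],
    [2, 2, 1],
    [2, 3, 1],
])

# Hypothetical mean reaction times (ms) observed in the four conditions.
rt = np.array([545, 655, 915, 1025])

# Least-squares estimate of the time taken by each component process.
times, *_ = np.linalg.lstsq(design, rt, rcond=None)
for name, t in zip(["encode", "compare", "respond"], times):
    print(f"{name}: about {t:.0f} ms")
```

The same subtraction logic, applied to name-match versus physical-match reaction times, is what the researchers described next used to isolate the speed of retrieving lexical information from memory.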

Instead of starting with conventional psychometric tests, they began with tasks that experimental psychologists were using in their laboratories to study the basic phenomena of cognition, such as perception, learning, and memory. They showed that individual differences in these tasks, which had never before been taken seriously, were in fact related (although rather weakly) to patterns of individual differences in psychometric intelligence test scores. These results, they argued, showed that the basic cognitive processes might be the building blocks of intelligence. Following is an example of the kind of task Hunt and his colleagues studied in their research. The experimental subject is shown a pair of letters, such as "A A," "A a," or "A b." The subject's task is to respond as quickly as possible to one of two questions: "Are the two letters the same physically?" or "Are the two letters the same only in name?" In the first pair the letters are the same physically, and in the second pair the letters are the same only in name. The psychologists hypothesized that a critical ability underlying intelligence is that of rapidly retrieving lexical information, such as letter names, from memory. Hence, they were interested in the time needed to react to the question about letter names. They subtracted the reaction time to the question about physical match from the reaction time to the question about name match in order to isolate and set aside the time required for sheer speed of reading letters and pushing buttons on a computer. The critical finding was that the score differences seemed to predict psychometric test scores, especially those on tests of verbal ability, such as verbal analogies and reading comprehension. The testing group concluded that verbally facile people are those who have the underlying ability to

absorb and then retrieve from memory large amounts of verbal information in short amounts of time. The time factor was the significant development here. A few years later, the American psychologist Robert J. Sternberg suggested an alternative approach to studying the cognitive processes underlying human intelligence. He argued that Hunt and his colleagues had found only a weak relation between basic cognitive tasks and psychometric test scores because the tasks they were using were at too low a level. Although low-level cognitive processes may be involved in intelligence, according to Sternberg they are peripheral rather than central. He proposed that psychologists should study the tasks found on the intelligence tests and then determine the mental processes and strategies that people use to perform those tasks. Sternberg began his study with the analogies tasks such as lawyer : client :: doctor : ?. He determined that the solution to such analogies requires a set of component cognitive processes: namely, encoding of the analogy terms (e.g., retrieving from memory attributes of the terms lawyer, client, and so on), inferring the relation between the first two terms of the analogy (e.g., figuring out that a lawyer provides professional services to a client), mapping this relation to the second half of the analogy (e.g., figuring out that both a lawyer and a doctor provide professional services), applying this relation to generate a completion (e.g., realizing that the person to whom a doctor provides professional services is a patient), and then responding. Using techniques of mathematical modeling applied to reaction-time data, Sternberg proceeded to isolate the components of information processing. He determined whether or

not each experimental subject did, indeed, use these processes, how the processes were combined, how long each process took, and how susceptible each process was to error. Sternberg later showed that the same cognitive processes are involved in a wide variety of intellectual tasks, and he suggested that these and other related processes underlie scores on intelligence tests. Other cognitive psychologists have pursued different paths in the study of human intelligence, including the building of computer models of human cognition. Two leaders in this field have been the American psychologists Allen Newell and Herbert A. Simon. In the late 1950s and early 1960s they worked with a computer expert, Clifford Shaw, to construct a computer model of human problem solving. Called the General Problem Solver, it could solve a wide range of fairly structured problems, such as logical proofs and mathematical word problems. Their program relied heavily on a heuristic procedure called "means-ends analysis," which, at each step of problem solving, determined how close the program was to a solution and then tried to find a way to bring the program closer to where it needed to be. In 1972, Newell and Simon proposed a general theory of problem solving, much of which was implemented on the computer. Most of the problems studied by Newell and Simon were fairly well structured, in that it was possible to identify a discrete set of moves that would lead from the beginning to the end of a problem. For example, in logical-theorem proving the final result is known, and what is needed is a discrete set of steps that lead to that solution. Even in chess, another object of study, a discrete set of moves can be determined that will lead from the beginning of a game to

checkmate. The biggest problem for a computer program (or a human player, for that matter) is in deciding which of myriad possible moves will most contribute toward winning a game. Other investigators have been concerned with less well-structured problems, such as how a text is comprehended, or how people are reminded of things they already know when reading a text. All of the cognitive theories described so far have in common their primary reliance on what psychologists call the serial processing of information. Fundamentally, this means that cognitive processes are executed in series, one after another. In solving an algebra problem, for example, first the problem is studied, then an attempt is made to formulate some equations to define knowns and unknowns, then the equations may be used to solve for the unknowns, and so on. The assumption is that people process chunks of information one at a time, seeking to combine the processes used into an overall strategy for solving a problem. For many years, various psychologists have challenged the idea that cognitive processing is primarily serial. They have suggested that cognitive processing is primarily parallel, meaning that humans actually process large amounts of information simultaneously. It has long been known that the brain works in such a way, and it seems reasonable that cognitive models should reflect this reality. It proved, however, to be difficult to distinguish between serial and parallel models of information processing, just as it had been difficult earlier to distinguish between different factor models of human intelligence. Subsequently, advanced techniques of mathematical and computer modeling were brought to bear on this problem, and various American researchers, including the

psychologists David E. Rumelhart and Jay L. McClelland, proposed what they call "parallel distributed processing" models of the mind. These models postulated that many types of information processing occur at once, rather than just one at a time. Even with computer modeling, some major problems regarding the nature of intelligence remain. For example, a number of psychologists, such as the American Michael E. Cole, have argued that cognitive processing does not take into account that the description of intelligence may differ from one culture to another and may even differ from one group to another within a culture. Moreover, even within the mainstream cultures of North America and Europe, it had become well known that conventional tests, even though they may predict academic performance, do not reliably predict performance in jobs or other life situations beyond school. It seemed, therefore, that not only cognition but also the context in which cognition operates had to be taken into account.

EXPLAIN THE TERM PERSONALITY. DISCUSS EXHAUSTIVELY THE THEORETICAL EXPLANATION OF HUMAN PERSONALITY

Personality is the particular combination of emotional, attitudinal, and behavioral response patterns of an individual. Many creative people show a strong interest in apparent disorder, contradiction, and imbalance; they often seem to consider asymmetry and disorder a challenge. At times creative persons give an impression of psychological imbalance, but immature personality traits may be an extension of a generalized receptivity to a wider-than-normal range of experience and behaviour patterns.

Such individuals may possess an exceptionally deep, broad, and flexible awareness of themselves. Studies indicate that the creative person is nonetheless an intellectual leader with a great sensitivity to problems. He exhibits a high degree of self-assurance and autonomy. He is dominant and is relatively free of internal restraints and inhibitions. He has a considerable range of intellectual interests and shows a strong preference for complexity and challenge. The unconventionality of thought that is sometimes found in creative persons may be in part a resistance to acculturation, which is seen as demanding surrender of one's personal, unique, fundamental nature. This may result in a rejection of conventional morality, though certainly not in any abatement of the moral attitude.
