
RESEARCH METHODOLOGY

1. Define Research and explain the process of Research.

Research is a systematic investigation undertaken to establish facts, answer questions, or solve problems. The Research Process - Marketing departments need information so they can get their marketing mix right. For example, they will want to know what similar products already exist and how much they cost. They will also want to know whether consumers will want to buy their new product, and what they think about it. The process for doing this is as follows: 1. Problem Identification 2. Research Objectives 3. Data Gathering 4. Data Collection 5. Evaluation of Alternatives 6. Research Techniques / Data Analysis 7. Reporting and Presentation of Data

2. What are the steps involved in formulating a Research problem?

A research problem is the first step and the most important requirement in the research process. It serves as the foundation of a research study: if it is well formulated, you can expect a good study to follow. According to Kerlinger, in order for one to solve a problem, one must know what the problem is; a large part of solving a problem is knowing what one is trying to do. A research problem, and the way you formulate it, determines almost every step that follows in the research study. Formulation of the problem is the input into the study, and the output is the quality of the contents of the research report. The steps involved in formulating a research problem are as follows: 1. Identify a broad area of interest in your academic/professional field. 2. Dissect the broad area into sub-areas by holding a brainstorming session with your colleagues. 3. Select the sub-area in which you would like to conduct your research, through a process of elimination. 4. Raise the research questions that you would like to answer through your study. This can follow the formulation of the objectives, or it can lead you to the formulation of the objectives. 5. Assess these objectives to ascertain the feasibility of attaining them in the light of time and other constraints such as finances and human resource expertise.

3. What are the different types of Research designs?

Listed below is the whole range of research designs that you could use for your dissertation. Historical Research Design - The purpose is to collect, verify, and synthesize evidence to establish facts that defend or refute your hypothesis. It uses primary sources, secondary sources, and many qualitative data sources such as logs, diaries, official records, and reports. The limitation is that the sources must be both authentic and valid. Case and Field Research Design - Also called ethnographic research, it uses direct observation to give a complete snapshot of a case that is being studied. It is useful when not much is known about a phenomenon, and it uses few subjects. Descriptive or Survey Research Design - It attempts to describe and explain conditions of the present by using many subjects and questionnaires to fully describe a phenomenon. Survey research design / survey methodology is one of the most popular choices for dissertation research and has many advantages. Correlational or Prospective Research Design - It attempts to explore relationships in order to make predictions. It uses one set of subjects with two or more variables for each. Causal Comparative or Ex Post Facto Research Design - This design attempts to explore cause-and-effect relationships where causes already exist and cannot be manipulated. It uses what already exists and looks backward to explain why. Developmental or Time Series Research Design - Data are collected at certain points in time going forward.
There is an emphasis on time patterns and longitudinal growth or change. Experimental Research Design - This design is most appropriate in controlled settings such as laboratories. It assumes random selection of subjects and random assignment to groups (E and C). It attempts to explore cause-and-effect relationships where causes can be manipulated to produce different kinds of effects. Because of the requirement of random assignment, this design can be difficult to execute in a real-world (non-laboratory) setting. Quasi-Experimental Research Design - This design approximates the experimental design but does not have a control group. There is more error possible in the results.

4. SPSS (Statistical Package for the Social Sciences)

How SPSS Works (by Arthur Griffith): The developers of SPSS made every effort to make the software easy to use and to keep you from making mistakes or forgetting something. That's not to say it's impossible to do something wrong, but the software works hard to keep you from running into the ditch; to foul things up, you almost have to work at figuring out a way of doing something wrong. You always begin by defining a set of variables, and then you enter data for the variables to create a number of cases. For example, if you are doing an analysis of automobiles, each car in your study would be a case. The variables that define the cases could be things such as the year of manufacture, horsepower, and cubic inches of displacement. Each car in the study is defined as a single case, and each case is defined as a set of values assigned to the collection of variables. Every case has a value for each variable. (Well, you can have a missing value, but that's a special situation described later.) Variables have types. That is, each variable is defined as containing a specific kind of number. For example, a scale variable is a numeric measurement, such as weight or miles per gallon. A categorical variable contains values that define a category; for example, a variable named gender could be a categorical variable defined to contain only the values 1 for female and 2 for male. Things that make sense for one type of variable don't necessarily make sense for another. For example, it makes sense to calculate the average miles per gallon, but not the average gender. After your data is entered into SPSS (your cases are all defined by values stored in the variables), you can run an analysis. You have already finished the hard part: running an analysis on the data is much easier than entering the data. To run an analysis, you select the one you want to run from the menu, select appropriate variables, and click the OK button. SPSS reads through all your cases, performs the analysis, and presents you with the output. You can instruct SPSS to draw graphs and charts the same way you instruct it to do an analysis: you select the desired graph from the menu, assign variables to it, and click OK. When preparing SPSS to run an analysis or draw a graph, the OK button is unavailable until you have made all the choices necessary to produce output. Not only does SPSS require that you select a sufficient number of variables to produce output, it also requires that you choose the right kinds of variables. If a categorical variable is required for a certain slot, SPSS will not allow you to choose any other kind. Whether the output makes sense is up to you and your data, but SPSS makes certain that the choices you make can be used to produce some kind of result. All output from SPSS goes to the same place: a dialog box named SPSS Viewer. It opens to display the results of whatever you've done. After you have output, if you perform some action that produces more output, the new output is displayed in the same dialog box. And almost anything you do produces output.
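The define-variables, enter-cases, run-analysis workflow described above can be mimicked outside SPSS. The sketch below is only an illustration, not SPSS itself: it uses Python's pandas library and a few made-up car records echoing the automobile example.

    # Rough analogue of the SPSS workflow on hypothetical data (pandas assumed to be installed).
    import pandas as pd

    # "Define variables and enter cases": each row is a case, each column a variable.
    cars = pd.DataFrame({
        "year":         [1998, 2003, 2007],   # scale variable
        "horsepower":   [150, 220, 180],      # scale variable
        "owner_gender": [1, 2, 1],            # categorical variable: 1 = female, 2 = male
    })

    # "Run an analysis": averages make sense for scale variables...
    print(cars[["year", "horsepower"]].mean())

    # ...but for a categorical variable a frequency count is the sensible summary instead.
    print(cars["owner_gender"].value_counts())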

6. What are the different sources of secondary information?

When writing a term paper, research paper, or essay, students often do not know the difference between primary and secondary sources. This can lead to problems in writing research papers that require primary sources. The best way to meet the requirements of an essay or research paper is to know what type of sources are needed, which means knowing the difference between primary and secondary sources. A primary source is an original article or book created by an individual or sometimes a group of people. What types of primary sources are available? It might be surprising to know that a novel is a primary source. Another type of primary source is a painting created by the artist; if it were a photocopy of the painting, it would be a secondary source. Some other primary sources are letters, films, short stories, plays, poems, photographs, court cases, journal articles, newspaper accounts of events, and speeches. For instance, a speech by President Bush would be a primary source. In simple terms, primary sources come firsthand from the source or person. A diary is a primary source because it is written directly by the individual keeping the diary. Interviews are great primary sources because the individual talks about the topic directly from what he or she knows about it. Primary sources are usually firsthand information about something, such as diaries, court records, interviews, research studies about experiments, and information that has been stated but not interpreted by others. Some examples of primary sources are e-mails and letters, which are written directly by the person involved. If a letter written during World War II were analyzed by another person, that analysis would be a secondary source. Debates, community meetings, surveys, and observations are other primary sources. Secondary sources are sources that are written about primary sources. Secondary sources analyze, interpret, and discuss information about the primary source. If a magazine writer wrote about the speech President Bush gave on September 11th, that article would be a secondary source: the information is not original, but it is an analysis of the speech. In simple terms, a secondary source writes or talks about something that is a primary source. For instance, if a person were to write about a painting hanging in an art gallery, this would be a secondary source discussing the original art. Secondary sources include journal articles, books, encyclopedias, dictionaries, reviews, newspaper articles, specific essays, etc. Most research papers are based on secondary sources, as they build on the research or studies others have done. Other types of secondary sources are reference materials, books, CD-ROMs, magazines, videotapes, and television shows. Most secondary sources analyze the material or restate the works of others. Many secondary sources are used to argue someone's thesis or main points about a topic. For instance, a writer might use debates between presidential candidates in a magazine article to show how one candidate feels about a topic the writer is discussing. Sometimes a source can be a primary source in one journal article and a secondary source in another. It depends on the writer's relationship to the material: if the writer has been an active part of the research and writes about it firsthand, then the article is a primary source.
If the writer writes about research done by others, then the writing is a secondary source. Primary sources are taken directly from an individual or group of individuals, while secondary sources take information from an individual or group and analyze the topic. Remembering this helps in deciding whether a source is primary or secondary.

7. What are the different techniques to be used for collecting primary data?

Statistical data, as we have seen, can be either primary or secondary. Primary data are those which are collected for the first time and so are in crude form, while secondary data are those which have already been collected. Primary data are always collected from the source, either by the investigator himself or through his agents. There are different methods of collecting primary data, and each method has its relative merits and demerits. The investigator has to choose a particular method to collect the information; the choice to a large extent depends on the preliminaries to data collection. Some of the commonly used methods are discussed below. 1. Direct Personal Observation: This is a very general method of collecting primary data. Here the investigator directly contacts the informants, solicits their cooperation, and enumerates the data. The information is collected through direct personal interviews. The strength of this method is its simplicity: it is difficult for neither the enumerator nor the informants, because both are present at the spot of data collection. This method provides the most accurate information, as the investigator collects it personally. But as the investigator alone is involved in the process, his personal bias may influence the accuracy of the data, so it is necessary that the investigator be honest, unbiased, and experienced; in such cases the data collected may be fairly accurate. However, the method is quite costly and time-consuming, so it should be used only when the scope of enquiry is small. 2. Indirect Oral Interviews: This is an indirect method of collecting primary data. Here information is not collected directly from the source but by interviewing persons closely related with the problem. This method is applied, for example, to apprehend culprits in cases of theft, murder, etc. Information relating to one's personal life, or which the informant hesitates to reveal, is better collected by this method. Here the investigator prepares a small list of questions relating to the enquiry, and the answers (information) are collected by interviewing persons well connected with the incident. The investigator should cross-examine the informants to get correct information.

This method is time-saving and involves relatively little cost. The accuracy of the information largely depends upon the integrity of the investigator. It is desirable that the investigator be experienced and capable enough to inspire and create confidence in the informant in order to collect accurate data. 3. Mailed Questionnaire Method: This is a very commonly used method of collecting primary data. Here information is collected through a questionnaire, a document prepared by the investigator containing a set of questions that relate to the problem of enquiry directly or indirectly. The questionnaires are first mailed to the informants with a formal request to answer the questions and send them back. For a better response, the investigator should bear the postal charges. The questionnaire should carry a polite note explaining the aims and objectives of the enquiry and the definitions of the various terms and concepts used in it. Besides this, the investigator should ensure the confidentiality of the information as well as the names of the informants, if required. The success of this method greatly depends upon the way in which the questionnaire is drafted, so the investigator must be very careful while framing the questions. The questions should be (i) short and clear, (ii) few in number, (iii) simple and intelligible, (iv) corroboratory in nature, or with provision for cross-checks, (v) impersonal and non-aggressive, and (vi) of the simple alternative, multiple-choice, or open-end type. (a) In the simple alternative question type, the respondent has to choose between alternatives such as Yes or No, right or wrong, etc. For example: Is Adam Smith called the father of Statistics? Yes/No. (b) In the multiple-choice type, the respondent has to answer from any of the given alternatives. Example: To which sector do you belong? (i) Primary Sector (ii) Secondary Sector (iii) Tertiary or Service Sector. (c) In open-end or free-answer questions the respondents are given complete freedom in answering, for example: What are the defects of our educational system? The questionnaire method is very economical in terms of time, energy, and money, and it is widely used when the scope of enquiry is large. Data collected by this method are not affected by the personal bias of the investigator. However, the accuracy of the information depends on the cooperation and honesty of the informants. This method can be used only if the informants are cooperative, conscious, and educated, which limits its scope. 4. Schedule Method: In case the informants are largely uneducated and non-responsive, data cannot be collected by the mailed questionnaire method. In such cases, the schedule method is used to collect data: the questionnaires are sent through enumerators to collect the information. Enumerators are persons appointed by the investigator for this purpose. They directly meet the informants with the questionnaire, explain the scope and objective of the enquiry, and solicit the informants' cooperation. The enumerators ask the questions, record the answers in the questionnaire, and compile them. The success of this method depends on the sincerity and efficiency of the enumerators, so the enumerator should be sweet-tempered, good-natured, trained, and well-behaved. The schedule method is widely used in extensive studies and gives fairly correct results, as the enumerators directly collect the information.
The accuracy of the information depends upon the honesty of the enumerators; they should be unbiased. This method is relatively more costly and time-consuming than the mailed questionnaire method. 5. From Local Agents: Sometimes primary data are collected from local agents or correspondents. These agents are appointed by the sponsoring authorities. They are well conversant with local conditions such as language, communication, food habits, and traditions. Being on the spot and well acquainted with the nature of the enquiry, they are capable of furnishing reliable information. The accuracy of the data collected by this method depends on the honesty and sincerity of the agents, because they actually collect the information on the spot. Information from a wide area can be collected by this method at less cost and in less time, and it is generally used by government agencies, newspapers, periodicals, etc. Information is like the raw material or input of an enquiry, and the result of the enquiry basically depends on the type of information used. Primary data can be collected by employing any of the above methods, and the investigator should make a rational choice among them, because the collection of data forms the beginning of the statistical enquiry.

8. What are the different types of data to be collected for research?

Data collection in marketing research is a detailed process in which a planned search for all relevant data is made by the researcher.

Types of Data

1. Primary Data - Primary data is data which is collected first hand, specifically for the purpose of the study. It is collected for addressing the problem at hand; thus, primary data is original data collected by the researcher first hand.

2. Secondary Data - Secondary data is data that has already been collected by, and is readily available from, other sources. Such data is cheaper and more quickly obtainable than primary data, and may also be available when primary data cannot be obtained at all.

Data Collection Methods

1. Qualitative Research - Qualitative research is generally undertaken to develop an initial understanding of the problem. It is non-statistical in nature and uses an inductive method: data relevant to some topics are collected and grouped into appropriate meaningful categories, and the explanations emerge from the data itself. It is used in exploratory research design and in descriptive research as well. Qualitative data comes in a variety of forms such as interview transcripts, documents, diaries, and notes made while observing. There are two main methods for collecting qualitative data:
a. Direct Collection Method - the purpose of data collection is disclosed to the respondents. This method makes use of Focus Groups, Depth Interviews, and Case Studies.
b. Indirect Collection Method - a disguised approach in which the purpose of data collection is not revealed, making use of Projective Techniques.

2. Quantitative Research - Quantitative research quantifies the data and generalizes the results from the sample to the population. In quantitative research, data can be collected by two methods:
a. Survey Method
b. Observation Method

9. What is an Experimental Design?

Observations - This is where you make some simple observations about the environment that you would like to study. You need to know what physical conditions are in the area, what the biotic and abiotic factors are, and what organisms are present. Scientists must first familiarize themselves with the environment that they wish to study. Research - A little research about an area helps scientists understand what is already known about their potential research area. This can be done using the library or by talking to other people who are familiar with the area. Research can help answer some basic questions and focus your experimental question. Hypothesis - Once we have a question, we want to make some predictions about what we expect to happen. Here we form an "If ... then ... because" statement. It should read: If I do this or change this, then the organism's response will be this, because of the impact of the environment. Methods and Materials - This is where you spend most of your time planning the research. There are several things that must be taken into consideration in this section. Variables - these are what you want to change in your experiment, what you alter in order to answer your research question. If you want to know how temperature affects an organism, then your variable is temperature. Treatments - the treatments are the different conditions that you will create in order to test your variables. You create a standard environment, and each treatment then changes the one variable you are trying to test. Controls - every experiment needs to have controls. These are the things that you do not change, the constants that you keep the same so that any differences between your treatments can be attributed to your variables. Replicates - for each treatment you use to test your hypothesis, you want several replicates in order to get an average answer. The goal is not to find out what could happen in extremes but what is most likely to happen in the different treatments. Data collection - this is important to think about beforehand. What data do you want to collect and record in order to test your hypothesis? You also need to consider how to measure your results: will you need a ruler, a stopwatch, a scale? Materials needed - once you have a plan, you create a materials list that includes all the equipment needed to carry out the experiment. Results - Most results are recorded in a table of some form so that recording and later reading the results is easy. What is important in a results section is turning the data collected into a form that can be easily viewed and discussed. This usually involves making graphs of some sort and computing averages; bar graphs, line graphs, and pictograms are all useful in displaying results. Discussion - One of the most important parts of the discussion is to re-address the hypothesis statement: either you gained evidence to support your hypothesis or you did not. Experiments do not prove ideas; they gain evidence for theories. A theory is only proved, or becomes a scientific law, when tried and tested by many people over many years. You also want to address any experimental errors that may have occurred due to equipment or human error. State what the errors were and how you would correct them the next time. Discussions should also include what you would improve on next time and what other questions came up during the procedure.
Many experiments may collect evidence for one question, but many more questions come up during the process. The discussion also needs to include some explanation and discussion of the results. You need to answer the question "So what?" Why are the results you found relevant and interesting? How do they relate to the organism and the environment in which it lives? Answering "So what?" can be difficult, but it is this discussion that can be the most interesting part of research, and it often leads to many new questions and research ideas to explore further.

10. Define Hypothesis.

A hypothesis is a supposition: a proposition or principle which is supposed or taken for granted in order to draw a conclusion or inference for proof of the point in question; something not proved, but assumed for the purpose of argument, or to account for a fact or an occurrence (as, the hypothesis that head winds detain an overdue steamer). It is a tentative theory or supposition provisionally adopted to explain certain facts and to guide in the investigation of others; hence it is frequently called a working hypothesis.

11. How will a Hypothesis be formulated and tested?

Most of the time, you formulate a hypothesis when performing experiments or testing theories. The point of the hypothesis is to create a testable statement that you believe may be the result of what you're testing or researching. A hypothesis doesn't have to be correct; however, it should be based on prior observations. The whole purpose of your experiment or test will be to prove or disprove your hypothesis. Instructions:
1. Determine your test subject. This can be a science experiment, a scientific theory, or any other project whose end result must be proven through a series of tests or research.

2. Decide exactly what you're trying to prove or disprove. For instance, will plants grow better with tap water or rain water?

3. Determine the variables you will be testing. These variables will become part of your hypothesis.

4. Formulate your hypothesis based on prior observations or research. If you've never had any experience with the subject before, take an educated guess.

5. Write your hypothesis in the "If, Then" format. Use the word "related" in the If portion of the statement; this creates an independent and a dependent variable. For instance: if nutrients in rain water are related to plant growth, then plants watered with rain water will grow better than those watered with tap water.

Tips & Warnings

Your hypothesis should tell you exactly what you are testing for without excluding any other possibilities. Think of a hypothesis as a flexible opinion. Be sure your hypothesis is written so that it is obvious you are testing two variables.
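The rain-water versus tap-water example above can be carried through to an actual statistical test. The sketch below is illustrative only: the growth measurements are made up, and an independent-samples t-test (via Python's scipy library) is just one common way to compare two groups.

    # Illustrative only: testing the rain-water vs. tap-water hypothesis on made-up growth data.
    from scipy import stats

    rain_watered = [12.1, 13.4, 11.8, 14.0, 12.9]   # plant growth in cm (hypothetical)
    tap_watered  = [10.2, 11.1, 10.8, 11.5, 10.9]   # plant growth in cm (hypothetical)

    # Independent-samples t-test: a small p-value is evidence against "no difference".
    t_stat, p_value = stats.ttest_ind(rain_watered, tap_watered)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    # The result supports or fails to support the hypothesis; it does not prove it.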

12. What are the characteristics of a good research?

The researcher should be a creative and highly motivated individual, a good problem solver who sees problems as challenges to be overcome rather than avoided. They need to have a good appearance, since they will be representing the company with many outside agencies. They should be able to work as a member of a team and to take direction. A minimum educational qualification of a degree in a relevant field is often required. They should also possess excellent oral and written skills, being able to communicate easily, effectively, and persuasively on the phone and in writing. Proven postgraduate research experience, a multidisciplinary academic background, good visual sense, demonstrable interest in interactive multimedia, and basic word-processing skills are advantageous. The research itself should be: Relevant, Effective, Simple, Edited, Articulate, Reliable, Clear, and Helpful.

13. What are the characteristics of a usable hypothesis?

WHAT IS A HYPOTHESIS? A hypothesis is a preliminary or tentative explanation or postulate by the researcher of what the researcher considers the outcome of an investigation will be. It is an informed, educated guess. It indicates the expectations of the researcher regarding certain variables, and it is the most specific way in which an answer to a problem can be stated. Mouton's (1990: Chapter 6) and Guy's (1987: 116) presentations of the hypothesis: Mouton: a statement postulating a possible relationship between two or more phenomena or variables. Guy: a statement describing a phenomenon or specifying a relationship between two or more phenomena.

THE PURPOSE AND FUNCTION OF A HYPOTHESIS - It offers explanations for the relationships between variables that can be empirically tested. It furnishes proof that the researcher has sufficient background knowledge to enable him/her to make suggestions in order to extend existing knowledge. It gives direction to an investigation. It structures the next phase in the investigation and therefore furnishes continuity to the examination of the problem.

CHARACTERISTICS OF A HYPOTHESIS - It should have elucidating power. It should strive to furnish an acceptable explanation of the phenomenon. It must be verifiable. It must be formulated in simple, understandable terms. It should correspond with existing knowledge.

14. What are the steps to be followed when designing a questionnaire?

Market research is all about reducing your business risks through the smart use of information. It is often said that 'knowledge is power', and through market research you will have the power to discover new business opportunities, closely monitor your competitors, effectively develop products and services, and target your customers in the most cost-efficient way. However, in order to get useful results you need to make sure you are asking the right questions, to the right people, and in the right way. The following tips are designed to help you avoid some of the common pitfalls when designing a market research questionnaire. 1. What are you trying to find out? - A good questionnaire is designed so that your results will tell you what you want to find out. Start by writing down what you are trying to do in a few clear sentences, and design your questionnaire around this. 2. How are you going to use the information? - There is no point conducting research if the results aren't going to be used, so make sure you know why you are asking the questions in the first place. Make sure you cover everything you will need when it comes to analysing the answers; for example, maybe you want to compare answers given by men and women, which you can only do if you've remembered to record the gender of each respondent on each questionnaire. 3. Telephone, Postal, Web, or Face-to-Face? - There are many methods used to ask questions, and each has its good and bad points. For example, postal surveys can be cheap, but responses can be low and can take a long time to arrive; face-to-face interviews can be expensive but will generate the fullest responses; web surveys can be cost-effective but hit-and-miss on response rates; and telephone surveys can be costly but will often generate high response rates, give fast turnaround, and allow for probing. 4. Qualitative or Quantitative? - Do you want to focus on the numbers (e.g. 87% of respondents thought this), or are you more interested in interpreting feedback

from respondents to bring out common themes? The method used will generally be determined by the subject matter you are researching and the types of respondents you will be contacting. 5. Keep it short; in fact, quite often the shorter the better. We are all busy, and as a general rule people are less likely to answer a long questionnaire than a short one. If you are going to be asking your customers to answer your questionnaire in-store, make sure the interview is no longer than 10 minutes (about 10 to 15 questions). If your questionnaire is too long, try to remove some questions: read each question and ask, "How am I going to use this information?" If you don't know, don't include it! 6. Use simple and direct language. The questions must be clearly understood by the respondent. The wording of a question should be simple and to the point; do not use uncommon words or long sentences. 7. Start with something general. Respondents will be put off, and may even refuse to complete your questionnaire, if you ask questions that are too personal at the start (e.g. questions about financial matters, age, or even whether or not they are married). 8. Place the most important questions in the first half of the questionnaire. Respondents sometimes only complete part of a questionnaire; by putting the most important items near the beginning, the partially completed questionnaires will still contain important information. 9. Leave enough space to record the answers. If you are going to include questions which may require a long answer, e.g. asking someone why they do a particular thing, make sure you leave enough room to write in the possible answers. It sounds obvious, but it's so often overlooked! 10. Test your questionnaire on your colleagues. No matter how much time and effort you put into designing your questionnaire, there is no substitute for testing it. Complete some interviews with your colleagues BEFORE you ask the real respondents. This will allow you to time your questionnaire, make any final changes, and get feedback from your colleagues.
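Tip 2 above notes that you can only compare answers given by men and women if gender was recorded for each respondent. The sketch below illustrates that kind of breakdown on a few made-up responses using Python's pandas library; the column names and data are hypothetical.

    # Hypothetical questionnaire responses: the comparison in tip 2 is only possible
    # because a gender column was recorded alongside the answers.
    import pandas as pd

    responses = pd.DataFrame({
        "gender":       ["F", "M", "F", "M", "F"],
        "satisfaction": [4, 3, 5, 2, 4],   # 1 = very dissatisfied ... 5 = very satisfied
    })

    # Average satisfaction for men and for women.
    print(responses.groupby("gender")["satisfaction"].mean())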

15. How can an attitude be measured using different research techniques?

When reviewing the literature that deals with attitude change and instructional technology, it is very apparent that attitude measurement is often done very poorly. Simonson (1979a) commented on the sad state of attitude measurement in the educational technology literature, and more recent reviews have not revealed any improvements in testing methodology (Simonson & Maushak, 1995). The move to more qualitative-based research (see 40.2) and measurement has not changed this situation, and may be contributing to a decline in the quality of attitude testing (see 6.1). Before beginning this discussion of attitude measurement, it is important once again to establish a frame of reference for this review. Attitude research is largely conducted by those called empiricists, objectivists, and reductionists. They tend to take the approach of the scientific empiricist who believes that there are laws of nature that the scientist must discover. The vast body of attitude and attitude-change literature is authored by those attempting to "discover the answer" and to determine "truth." These researchers usually apply quantitative approaches in their research designs (see 39.4). Those advocating naturalistic inquiry (see 40.2) may be uncomfortable with the approach taken by this chapter. A general question often asked by qualitative researchers, "What is going on here?", does not readily translate to results of the kind summarized in this chapter and the type of measurement techniques recommended next. Certainly, it would be unwise to discount qualitative techniques for examining the critical issues of the field. Just as certainly, the vast body of literature about attitudes and attitude measurement was generated by scientists who applied quantitative approaches to measurement. Problems with attitude measurement are of three types. First, researchers are not clearly defining their attitude variables. In other words, they are not operationalizing the constructs that they set out to measure. This problem is heightened by the failure of many to include attitude hypotheses or research questions in their research designs; rather, attitude constructs are often included as post-hoc components of research studies. Qualitative researchers also tend to show little interest in attitude constructs. Second, attitudes are not measured well. Certainly, quantitative measurement of attitudes has evolved into a fairly exact process (Henerson, Morris & Fitz-Gibbon, 1987). However, the methods used to develop measures of attitudes are reported in only a minority of the research studies found in the literature. Simonson (1979a) reported that only 50% of the studies reviewed reported on the validation of attitude measures, and only 20% reported descriptive information about their attitude tests. Most measures then, and today, tend to be locally prepared and used only once, in the specific study reported. Researchers who were otherwise extremely careful to standardize their achievement measures did not do the same for their tests of attitudes. One alarming trend was the use of single items to measure attitudes. Researchers reported using a single item to determine a person's attitude (e.g., Do you like chemistry?), and then used the responses to this question in powerful statistical analyses. Apparently, reliability and validity concerns were not worrisome to these researchers.
Finally, attitude measurement has tended to be of only peripheral importance to researchers. Often, as stated above, attitudes are relegated to post-hoc examinations, often conducted without controls or design considerations being taken into account. As a matter of fact, it is obvious that attitude study is not an area of interest or importance in mainstream instructional technology research. Of the hundreds of studies published in the literature of educational communications and technology since Simonson's review (1979a) of attitude research, less than 5% examined attitude variables as a major area of interest. This lack of interest was discouraging, especially when contrasted with the wealth of attitude research in the literature of social psychology. One reason attitudes may be studied so rarely is the difficulty many have in clearly identifying how attitudes should be measured. The characteristics of attitude contribute to this perception of difficulty, as does the recent move away from quantitative research procedures. In a recent review of the indexes of five textbooks dealing with methods of qualitative analysis, the term attitude was not found in any, even in the recently published Handbook of Qualitative Research (Denzin & Lincoln, 1994).

Since attitudes are defined as latent, and not observable in themselves, the educator must identify some action that would seem to be representative of the attitude in question, so that this behavior might be measured as an index of the attitude. This characteristic of attitude measurement is justifiably one of the most criticized in this area of educational evaluation. However, there are several generally recognized procedures used to determine quantitatively an individual's, or group's, attitude toward some object or person. It is those procedures that are described below. Two excellent sources of information on attitude measurement should be reviewed by those interested in quantitatively testing for attitudes. First is Himmelfarb's (Eagly & Chaiken, 1993) comprehensive review of the basic concepts and ideas behind attitude measurement; it also contains an explanation of the various techniques for quantifying attitude positions and is a scholarly explanation of attitude measurement. For those interested in more specific procedures for attitude measurement, Henerson, Morris, and Fitz-Gibbon's (1987) manual is excellent. It would be unfair to call the manual a cookbook, because it is more than that, but it does contain step-by-step, cookbook-like procedures for validly and reliably developing measures of attitudes. It is a must reference for those interested in quantifying attitudes as part of a research study but who do not wish to become attitude measurement experts. Henerson, Morris, and Fitz-Gibbon even include a section labeled "alternative approaches to collecting attitude information" designed to appeal to the qualitative researcher. 34.5.1 Characteristics of Quantitative Attitude Measurement. Before procedures for measuring attitudes are discussed, there are several general characteristics of measurement that should be considered in order to determine whether an evaluation technique is an effective one. Good tests have these characteristics. Basically, a quantitative approach to attitude measurement requires that measures be:

Valid. The instrument must be appropriate for what needs to be measured. In other words, a valid test measures the construct for which it is designed. A test of "attitude toward chemistry" will have items that deal directly with the concept of chemistry.

Reliable. The measure should yield consistent results. In other words, if people were to take a reliable test a second time, they should obtain the same, or nearly the same, score as they got the first time they took the test, assuming no changes occurred between the two testings.

Fairly simple to administer, explain, and understand. Generally, the measures that yield a single score of an attitude position epitomize the intent of this characteristic, although the single score may be deficient in meeting the intent of other characteristics of good measurement. Most tests of single attitudes have about 10 to 30 items, are valid, and have reliability estimates above .80.

Replicable. Someone else should be able to use the measure with a different group, or in a different situation, to measure the same attitude. Replicable tests of attitude should be usable in a variety of situations. In other words, a test of computer anxiety should measure the existence of that construct in college students, parents, elementary school students, and even stockbrokers.
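The reliability figure of .80 mentioned above is usually an internal-consistency estimate such as Cronbach's alpha, which is discussed again under test construction later in this answer. As a rough sketch only, with made-up responses to a four-item scale and numpy assumed to be available, alpha can be computed like this:

    # Hypothetical responses to a four-item attitude scale (rows = respondents, columns = items).
    import numpy as np

    items = np.array([
        [4, 5, 4, 3],
        [3, 4, 3, 3],
        [5, 5, 4, 4],
        [2, 3, 2, 2],
        [4, 4, 5, 4],
    ])

    k = items.shape[1]                              # number of items on the scale
    item_variances = items.var(axis=0, ddof=1)      # sample variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score

    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score).
    alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
    print(f"Cronbach's alpha = {alpha:.2f}")        # the .80 figure quoted above is a common benchmark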

34.5.2 Categories of Attitude Measurement Techniques There are four widely used and accepted categories, or approaches, for collecting attitude information. These approaches are:

Self-reports, where the members of a group report directly about their own attitudes. Self-reports include all procedures by which a person is asked to report on his or her own attitudes. This information can be provided orally through the use of interviews, surveys, or polls, or in written form through questionnaires, rating scales, logs, journals, or diaries. Self-reports represent the most direct type of attitude assessment and should be employed unless the people who are being investigated are unable or unwilling to provide the necessary information. Questions like "How do you feel about X?", where X is the attitude construct under investigation, are often asked in self-reports.

Reports of others, where others report about the attitudes of a person or group. When the people whose attitudes are being investigated are unable or unlikely to provide accurate information, others can be questioned using interviews, questionnaires, logs, journals, reports, or observation techniques. Parents of children can be asked how their children feel about X, where X is the attitude construct under investigation.

Sociometric procedures, where members of a group report about their attitudes toward one another. Sociometrics are used when the researcher desires a picture of the patterns within a group. Members of groups can be asked questions like "Who in your group fits the description of X?", where X is the attitude position being studied.

Records, which are systematic accounts of regular occurrences, such as attendance reports, sign-in sheets, library checkout records, and inventories. Records are very helpful when they contain information relevant to the attitude area in question. For example, when a researcher is trying to determine whether a schoolwide program to develop a higher level of school pride is working, the school's maintenance records might give an index of the program's effectiveness: if school pride is improving, then vandalism should decline, and maintenance costs should be lower. The amount of trash picked up from the school's floors might yield relevant information too, since students who have school pride are less likely to throw trash on the floor.

Within each of these categories, there are strategies for measuring attitude-related behaviors. Most commonly, attitude measurement is accomplished by one of the following techniques:

Questionnaires and rating scales. Questionnaires and rating scales are instruments that present information to a respondent in writing and then require a written response, such as a check, a circle, a word, a sentence, or several sentences. Attitude rating scales are special kinds of questionnaires: they are developed according to strict procedures that ensure that responses can be summed to yield a single score representing one attitude. Questionnaires and rating scales are often used because they permit anonymity, give the responder time to answer, can be given to many people simultaneously, provide uniformity across measurement situations, permit relatively easy data interpretation, and can be mailed or administered directly. Their main disadvantage is that they do not permit as much flexibility as some other techniques.

Interviews. Interviews are face-to-face meetings between two or more people in which the respondent answers questions. A survey is a highly structured interview; often surveys are conducted over the telephone, an approximation of face-to-face interviewing. A poll is a headcount: respondents are given a limited number of options and asked to select one. Word-of-mouth procedures such as interviews, surveys, and polls are useful because they can be read to people who cannot read or who may not understand written questions. They guarantee a relatively high response rate, they are best for some kinds of information (especially when people might change their answers if responses were written), and they are very flexible. There are two major problems with interviews. First, they are very time-consuming. Second, it is possible that the interviewer may influence the respondent.

Written reports, such as logs, journals, and diaries. Logs, journals, and diaries are descriptions of activities, experiences, and feelings written during the course of the program. Generally they are running accounts consisting of many entries prepared on an event, daily, or weekly basis. The main advantage of this approach is that reports provide a wealth of information about a person's experiences and feelings. The main problem is in extracting, categorizing, and interpreting the information; written reports require a great deal of time from both the respondent and the researcher.

Observations. These procedures require that a person dedicate his or her attention to the behaviors of an individual or group in a natural setting for a certain period of time. The main advantage of this approach is its increased credibility when pretrained, disinterested, unbiased observers are used. Formal observations often bring to attention actions and attitudes that might otherwise be overlooked. Observations are extremely time-consuming, and sometimes observers produce discomfort in those they are observing; the presence of an observer almost always alters what is taking place in a situation.

A specific strategy for attitude measurement should be chosen which is appropriate for the type of attitude construct of interest, the type of learner, and the situation being examined (Henerson, Morris & Fitz-Gibbon, 1987). The procedures summarized above are those most often used. Other strategies are available, but attitude researchers are cautioned to select a technique appropriate to their research questions and a technique they are competent to carry out. 34.5.3 A Recommended Process for Attitude Measurement. Attempts at measurement, including the evaluation of attitude, require that a systematic process be followed. Using structured procedures increases the likelihood of an effective measurement taking place. Guidelines for attitude measurement usually recommend that at least six steps be followed (Henerson, Morris & Fitz-Gibbon, 1987): 1. Identify the construct to be measured. A construct is simply defined as the attitude area of interest. It is usually best to identify specific attitude constructs: narrow attitude constructs such as "desire to take a course in chemistry" are probably better than "liking of chemistry," and "importance of knowing about the chemical elements" might be an even better attitude to measure. A learner can conceivably have an attitude position toward any object, situation, or person. When mediated instruction is designed, those attitudes that are important to the learning activity should be clearly identified and defined. An example of an attitude that an instructional developer might be interested in would be "attitude toward learning about titrations by video." 2. Find an existing measure of the construct. Once a certain attitude construct has been identified, an attempt should be made to locate an instrument that will measure it. Published tests are the first choice for measuring attitudes because they have usually been tried out in other instructional situations and include some statement of test validity and reliability. Additionally, instructions for the administration of published tests are often available. The use of standardized measures simplifies the job of attitude evaluation. The most obvious disadvantage to using a predesigned test is that it may not be evaluating the specific attitude being studied. Even if this is the case, it may sometimes be possible to extract valuable information from an instrument designed to test an attitude position similar to the one of specific instructional interest. Possibly the best source of published tests is the research literature: researchers who have conducted attitude research will often have developed or identified measures of their dependent variables that can be used in new experimental situations. If the research literature does not yield an appropriate measure of an attitude construct, then published indexes of tests can be reviewed. The Mental Measurements Yearbooks and Tests in Print are general sources for tests of all kinds. Often, standardized tests, such as those listed in general indexes, can be used to provide direction to the development of more specific attitude tests. 3. Construct an attitude measure. If no existing test of the relevant attitude is available, and a quantitative measure is needed, then it is necessary to construct a new test. Of the many types of attitude measurement possible, one widely used technique that seems to possess most of the characteristics of a good measure is the Agreement, or Likert-type, Scale.
This technique involves the use of statements about the attitude that are either clearly favorable or clearly unfavorable. Each student responds to each test item according to his or her perceived attitude "intensity" toward the statement. Often, students are asked to answer test items using a five-point scale whose responses vary in the amount of agreement with the statement, from "strongly disagree" to "strongly agree." Advantages of this technique are ease of scoring and ease of summarizing the information obtained. When a test is constructed, it is critical that validity and reliability information be collected for the measure. Of these two concepts, validity (i.e., the appropriateness of the instrument) is the more difficult to determine. Validity for a test depends on a number of factors, such as the type of test and its intended use. Basically, there are four categories of validity:

Construct validity. This concept refers to the extent to which the measure accurately represents the attitude construct whose name appears in its title. This can be determined by: a. Opinions of experts. Experts are asked to review the test, and their reactions are used to modify the test; if they do not have negative reactions, the test is considered valid. b. Correlations with other measures of the same construct. In some situations there may be other, often more complex, measures of the same variable available. Validity can be determined by asking a sample of learners to complete both the complex and the simpler versions and then correlating their scores. This procedure was used by Maurer (1983) when he validated his Computer Anxiety Index by correlating students' scores on it with Spielberger's (1970) much more complex and expensive State Anxiety Index.

c. Measures of criterion group subjects (those who have been proved to possess the construct). Maurer (1983) validated his computer anxiety index also using this technique. He observed learners and identified those who possessed the obvious

characteristics of the computer anxious person. He then examined their Computer Anxiety Index scores and determined that their Index scores were also high, indicating that it was validly measuring computer anxiety. d. Appeals to logic. Many times, particularly when the attitude can be easily defined, audiences will accept an instrument as logically related to the attitude, as long as they know it will be administered fairly.

Content validity. This refers to the representativeness of the sample of questions included in the instrument. Content validity is usually determined by careful analysis of the items in the test: there is no simple process to determine content validity other than a close, thoughtful examination of each item separately and all items collectively.

Concurrent validity. This refers to the agreement of a test with another test on the same topic that was administered at approximately the same time. Concurrent validity is determined by correlating the results of the two parallel measures of the same attitude; this correlation coefficient is reported as an index of concurrent validity. For example, if an attitude test measuring "willingness to study chemistry" was administered and scores were obtained, it could be correlated with the instructor's assessments of the "completion rate of chemistry homework assignments" in order to determine an index of concurrent validity.

Predictive validity. This refers to how well a measure will predict a future behavior, determined by comparing the results of an attitude test to a measure of behavior taken in the future. This type of validity is usually expressed by a correlation coefficient found by comparing the results of the two measures. For example, the results of an attitude test that measured "willingness to take additional chemistry courses" could be compared to actual course enrollment figures to determine the predictive validity of the attitude test.
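The concurrent-validity example above boils down to a single correlation between two sets of scores. A minimal sketch follows, with entirely hypothetical numbers and scipy's Pearson correlation standing in for whatever coefficient a researcher would actually choose:

    # Hypothetical concurrent-validity check: correlate two measures of the same attitude.
    from scipy.stats import pearsonr

    attitude_scores  = [32, 28, 41, 25, 38, 30]   # "willingness to study chemistry" test (made up)
    homework_ratings = [ 7,  6,  9,  5,  8,  6]   # instructor's homework-completion ratings (made up)

    r, p = pearsonr(attitude_scores, homework_ratings)
    print(f"validity coefficient r = {r:.2f} (p = {p:.3f})")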

Determining validity is not simple, however. Every educator who constructs a test of any type should be acutely aware of the need to develop valid instruments. Because there is no single, established method for determining validity, the test originator should exercise great care when constructing, administering, and interpreting tests. Reliability is the ability of a measure to produce consistent results. It is usually less difficult to determine than validity. Reliability also refers to the extent to which measurement results are free of unpredictable kinds of error. There are several methods of determining reliability that can be easily used by the attitude test developer. The "Test-Retest" method involves a second administration of the instrument to the target group and correlation of the results. The "Split-Half" method uses a random division of the instrument into two halves; results from each half are correlated and reported as a reliability coefficient. "Alternate-Form" reliability involves the correlation of the results of two parallel forms of tests of the same attitude construct: each subject takes each form, and the resulting correlation is reported as a reliability estimate. Internal-consistency reliability is a determination of how well the items of an attitude test correlate with one another. Measures of internal consistency, such as Cronbach's alpha, are often used by attitude test developers (Ferguson, 1971). Both the Test-Retest and Alternate-Form techniques will yield a score between -1.00 and +1.00; the higher the number, the more reliable the test. Reliability coefficients above .70 are considered respectable, and scores above .90 are not uncommon for standardized attitude tests. As with validity, the results of reliability estimation should be reported to the test's consumer (Anastasi, 1968; Cronbach, 1970; Talmage, 1978; Henerson et al., 1987). 4. Conduct a pilot study. While it is possible to obtain validity and reliability data during the actual testing portion of the instructional activity, it is preferable to administer attitude instruments to a pilot audience before any formal use is undertaken. This is done to obtain appropriate data and to uncover minor and potentially troublesome administrative problems such as misspellings, poor wording, or confusing directions. A group of learners similar to those who are the target group for the attitude test should be given the measure, and the results should be used to revise the test and to determine validity and reliability information. 5. Revise tests for use. Results of pilot testing are used to revise and refine attitude instruments. Once problems are eliminated, the measure is ready to be used with its intended target audience. 6. Summarize, analyze, and display results. After testing is completed, the resulting data should be interpreted. Attitude test results are handled similarly to any other quantitative test information. Attitude responses should be summarized, analyzed, and displayed in such a manner that results are easily and quickly understood by others. Descriptive statistics should be reported about the attitude test results; most often, means, standard deviations, and the range of scores should be reported. In experimental situations, tests of inference are often performed using the results of attitude tests. Most attitude test results can be analyzed using standard parametric tests, such as t-tests and analysis-of-variance tests.
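As a small illustration of the summary step, the descriptive statistics mentioned above (mean, standard deviation, and range) can be produced with a few lines of Python's standard library; the attitude scores below are entirely hypothetical.

    # Hypothetical attitude-test scores for one class (step 6: summarize, analyze, display).
    import statistics

    scores = [34, 29, 41, 38, 30, 36, 27, 40]

    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)          # sample standard deviation
    low, high = min(scores), max(scores)

    print(f"mean = {mean:.1f}, SD = {sd:.1f}, range = {low}-{high}")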
However, attitude data about instructional method or content area are often useful even if they are only averaged and compared to other averages. In other words, did the class average change for "Attitude Toward the Happiness of People in India" after viewing the video, or did the class react favorably to "The Importance of Wearing Seat-Belts" after participating in a hypermedia computer lesson? Displaying data is another effective method of analysis. Charts, graphs, and bar diagrams are examples of data display techniques that are useful in assisting the reader in developing an understanding of what test results indicate. Whatever the process, the developer of an attitude test should make every effort to decipher the results of the measure and to explain apparent conclusions and implications derived from the test. Attitude measurement is certainly not an exciting topic, and may be of less interest than other issues discussed in this chapter. However, attitude testing specifically, and identifying attitudes generally, are apparently not understood and probably not valued by many educational technology researchers. Certainly, the trend toward more qualitative approaches to investigation may convince some that attitude measurement, and even attitude identification, are irrelevant to the important issues of the field. However, those who are still approaching research questions from an objectivist perspective will want to be sure that they are correctly following the accepted principles of measurement. 16. How a sample can be designed and what are the different techniques used for designing the sample. When conducting research, it is almost always impossible to study the entire population that you are interested in. For example, if

you were studying political views among college students in the United States, it would be nearly impossible to survey every single college student across the country. If you were to survey the entire population, it would be extremely time-consuming and costly. As a result, researchers use samples as a way to gather data. A sample is a subset of the population being studied. It represents the larger population and is used to draw inferences about that population. Sampling is a research technique widely used in the social sciences as a way to gather information about a population without having to measure the entire population. There are several different types and ways of choosing a sample from a population, from simple to complex.
Non-probability Sampling Techniques
Non-probability sampling is a sampling technique where the samples are gathered in a process that does not give all the individuals in the population equal chances of being selected.
Reliance On Available Subjects. Relying on available subjects, such as stopping people on a street corner as they pass by, is one method of sampling, although it is extremely risky and comes with many cautions. This method, sometimes referred to as a convenience sample, does not allow the researcher to have any control over the representativeness of the sample. It is only justified if the researcher wants to study the characteristics of people passing by the street corner at a certain point in time or if other sampling methods are not possible. The researcher must also take caution to not use results from a convenience sample to generalize to a wider population.
Purposive or Judgmental Sample. A purposive, or judgmental, sample is one that is selected based on the knowledge of a population and the purpose of the study. For example, if a researcher is studying the nature of school spirit as exhibited at a school pep rally, he or she might interview people who did not appear to be caught up in the emotions of the crowd or students who did not attend the rally at all. In this case, the researcher is using a purposive sample because those being interviewed fit a specific purpose or description.
Snowball Sample. A snowball sample is appropriate to use in research when the members of a population are difficult to locate, such as homeless individuals, migrant workers, or undocumented immigrants. A snowball sample is one in which the researcher collects data on the few members of the target population he or she can locate, then asks those individuals to provide information needed to locate other members of that population whom they know. For example, if a researcher wishes to interview undocumented immigrants from Mexico, he or she might interview a few undocumented individuals that he or she knows or can locate and would then rely on those subjects to help locate more undocumented individuals. This process continues until the researcher has all the interviews he or she needs or until all contacts have been exhausted.
Quota Sample. A quota sample is one in which units are selected into a sample on the basis of pre-specified characteristics so that the total sample has the same distribution of characteristics assumed to exist in the population being studied. For example, if you are a researcher conducting a national quota sample, you might need to know what proportion of the population is male and what proportion is female, as well as what proportions of each gender fall into different age categories, race or ethnic categories, educational categories, etc.
The researcher would then collect a sample with the same proportions as the national population. Probability Sampling Techniques Probability sampling is a sampling technique where the samples are gathered in a process that gives all the individuals in the population equal chances of being selected. Simple Random Sample. The simple random sample is the basic sampling method assumed in statistical methods and computations. To collect a simple random sample, each unit of the target population is assigned a number. A set of random numbers is then generated and the units having those numbers are included in the sample. For example, lets say you have a population of 1,000 people and you wish to choose a simple random sample of 50 people. First, each person is numbered 1 through 1,000. Then, you generate a list of 50 random numbers (typically with a computer program) and those individuals assigned those numbers are the ones you include in the sample. Systematic Sample. In a systematic sample, the elements of the population are put into a list and then every kth element in the list is chosen (systematically) for inclusion in the sample. For example, if the population of study contained 2,000 students at a high school and the researcher wanted a sample of 100 students, the students would be put into list form and then every 20th student would be selected for inclusion in the sample. To ensure against any possible human bias in this method, the researcher should select the first individual at random. This is technically called a systematic sample with a random start. Stratified Sample. A stratified sample is a sampling technique in which the researcher divided the entire target population into different subgroups, or strata, and then randomly selects the final subjects proportionally from the different strata. This type of sampling is used when the researcher wants to highlight specific subgroups within the population. For example, to obtain a stratified sample of university students, the researcher would first organize the population by college class and then select appropriate numbers of freshmen, sophomores, juniors, and seniors. This ensures that the researcher has adequate amounts of subjects from each class in the final sample. Cluster Sample. Cluster sampling may be used when it is either impossible or impractical to compile an exhaustive list of the elements that make up the target population. Usually, however, the population elements are already grouped into subpopulations and lists of those subpopulations already exist or can be created. For example, lets say the target population in a study was church members in the United States. There is no list of all church members in the country. The researcher could, however, create a list of churches in the United States, choose a sample of churches, and then obtain lists of members from those churches. 17. What is the difference between Probability and Non probability sampling techniques. Non-probability sampling The difference between probability and non-probability sampling has to do with a basic assumption about the nature of the population under study. In probability sampling, every item has a chance of being selected. In non-probability sampling, there is an assumption that there is an even distribution of characteristics within the population. This is what makes the researcher believe that any sample would be representative and because of that, results will be accurate. 
For probability sampling, randomization is a feature of the selection process, rather than an assumption about the structure of the population.

In non-probability sampling, since elements are chosen arbitrarily, there is no way to estimate the probability of any one element being included in the sample. Also, no assurance is given that each item has a chance of being included, making it impossible either to estimate sampling variability or to identify possible bias. Reliability cannot be measured in non-probability sampling; the only way to address data quality is to compare some of the survey results with available information about the population. Still, there is no assurance that the estimates will meet an acceptable level of error. Statisticians are reluctant to use these methods because there is no way to measure the precision of the resulting sample. Despite these drawbacks, non-probability sampling methods can be useful when descriptive comments about the sample itself are desired. Secondly, they are quick, inexpensive and convenient. There are also other circumstances, such as in applied social research, when it is unfeasible or impractical to conduct probability sampling. Statistics Canada uses probability sampling for almost all of its surveys, but uses non-probability sampling for questionnaire testing and some preliminary studies during the development stage of a survey. Most non-probability sampling methods require some effort and organization to complete, but others, like convenience sampling, are done casually and do not need a formal plan of action. The most common types are listed below:

convenience or haphazard sampling
volunteer sampling
judgement sampling
quota sampling

Convenience or haphazard sampling
Convenience sampling is sometimes referred to as haphazard or accidental sampling. It is not normally representative of the target population because sample units are only selected if they can be accessed easily and conveniently. There are times when the average person uses convenience sampling. A food critic, for example, may try several appetizers or entrees to judge the quality and variety of a menu. And television reporters often seek so-called people-on-the-street interviews to find out how people view an issue. In both these examples, the sample is chosen haphazardly, without use of a formal survey method. The obvious advantage is that the method is easy to use, but that advantage is greatly offset by the presence of bias. Although useful applications of the technique are limited, it can deliver accurate results when the population is homogeneous. For example, a scientist could use this method to determine whether a lake is polluted. Assuming that the lake water is well-mixed, any sample would yield similar information. A scientist could safely draw water anywhere on the lake without fretting about whether or not the sample is representative. Examples of convenience sampling include:

the female moviegoers sitting in the first row of a movie theatre
the first 100 customers to enter a department store
the first three callers in a radio contest.

Volunteer sampling
As the term implies, this type of sampling occurs when people volunteer their services for the study. In psychological experiments or pharmaceutical trials (drug testing), for example, it would be difficult and unethical to enlist random participants from the general public. In these instances, the sample is taken from a group of volunteers. Sometimes, the researcher offers payment to entice respondents. In exchange, the volunteers accept the possibility of a lengthy, demanding or sometimes unpleasant process. Sampling voluntary participants as opposed to the general population may introduce strong biases. Often in opinion polling, only the people who care strongly enough about the subject one way or another tend to respond. The silent majority does not typically respond, resulting in large selection bias. Television and radio media often use call-in polls to informally query an audience on their views. The Much Music television channel uses this kind of survey in their Combat Zone program. The program asks viewers to cast a vote for one of two music videos by telephone, e-mail or through their online website. Oftentimes, there is no limit imposed on the frequency or number of calls one respondent can make. So, unfortunately, a person might be able to vote repeatedly. It should also be noted that the people who contribute to these surveys might have different views than those who do not.
Judgement sampling
This approach is used when a sample is taken based on certain judgements about the overall population. The underlying assumption is that the investigator will select units that are characteristic of the population. The critical issue here is objectivity: how much can judgment be relied upon to arrive at a typical sample? Judgement sampling is subject to the researcher's biases and is perhaps even more biased than haphazard sampling. Since any preconceptions the researcher may have are reflected in the sample, large biases can be introduced if these preconceptions are inaccurate.

Statisticians often use this method in exploratory studies like pre-testing of questionnaires and focus groups. They also prefer to use this method in laboratory settings where the choice of experimental subjects (i.e., animal, human, vegetable) reflects the investigator's pre-existing beliefs about the population. One advantage of judgement sampling is the reduced cost and time involved in acquiring the sample.
Quota sampling
This is one of the most common forms of non-probability sampling. Sampling is done until a specific number of units (quotas) for various sub-populations have been selected. Since there are no rules as to how these quotas are to be filled, quota sampling is really a means for satisfying sample size objectives for certain sub-populations. The quotas may be based on population proportions. For example, if there are 100 men and 100 women in a population and a sample of 20 is to be drawn to participate in a cola taste challenge, you may want to divide the sample evenly between the sexes: 10 men and 10 women. Quota sampling can be considered preferable to other forms of non-probability sampling (e.g., judgement sampling) because it forces the inclusion of members of different sub-populations. Quota sampling is somewhat similar to stratified sampling in that similar units are grouped together. However, it differs in how the units are selected. In probability sampling, the units are selected randomly while in quota sampling it is usually left up to the interviewer to decide who is sampled. This results in selection bias. Thus, quota sampling is often used by market researchers (particularly for telephone surveys) instead of stratified sampling, because it is relatively inexpensive and easy to administer and has the desirable property of satisfying population proportions. However, it disguises potentially significant bias. As with all other non-probability sampling methods, in order to make inferences about the population, it is necessary to assume that persons selected are similar to those not selected. Such strong assumptions are rarely valid.
Example 1: The student council at Cedar Valley Public School wants to gauge student opinion on the quality of their extracurricular activities. They decide to survey 100 of 1,000 students using the grade levels (7 to 12) as the sub-populations. The table below gives the number of students in each grade level. The student council wants to make sure that the percentage of students in each grade level is reflected in the sample. The formula is: Percentage of students in Grade 10 = (number of Grade 10 students / total number of students) x 100% = (150 / 1,000) x 100% = 15%. Since 15% of the school population is in Grade 10, 15% of the sample should contain Grade 10 students. Therefore, use the following formula to calculate the number of Grade 10 students that should be included in the sample: Sample of Grade 10 students = 15% of 100 = 0.15 x 100 = 15 students. The main difference between stratified sampling and quota sampling is that stratified sampling would select the students using a probability sampling method such as simple random sampling or systematic sampling. In quota sampling, no such technique is used. The 15 students might be selected by choosing the first 15 Grade 10 students to enter school on a certain day, or by choosing 15 students from the first two rows of a particular classroom.
Keep in mind that those students who arrive late or sit at the back of the class may hold different opinions from those who arrived earlier or sat in the front. The main argument against quota sampling is that it does not meet the basic requirement of randomness. Some units may have no chance of selection or the chance of selection may be unknown. Therefore, the sample may be biased. It is common, but not necessary, for quota samples to use random selection procedures at the beginning stages, much in the same way as probability sampling does. For instance, the first step in multi-stage sampling would be randomly selecting the geographic areas. The difference is in the selection of the units in the final stages of the process. In multi-stage sampling, units are based on up-to-date lists for selected areas and a sample is selected according to a random process. In quota sampling, by contrast, each interviewer is instructed on how many of the respondents should be men and how many should be women, as well as how many people should represent the various age groups. The quotas are therefore calculated from available data for the population, so that the sexes, age groups or other demographic variables are represented in the correct proportions. But within each quota, interviewers may fail to secure a representative sample of respondents. For example, suppose that an organization wants to find out information about the occupations of men aged 20 to 25. An interviewer goes to a university campus and selects the first 50 men aged 20 to 25 that she comes across and who agree to participate in her organization's survey. However, this sample does not mean that these 50 men are representative of all men aged 20 to 25. Quota sampling is generally less expensive than random sampling. It is also easy to administer, especially considering the tasks of listing the whole population, randomly selecting the sample and following-up on non-respondents can be omitted from the procedure. Quota sampling is an effective sampling method when information is urgently required and can be carried out independent of existing sampling frames. In many cases where the population has no suitable frame, quota sampling may be the only appropriate sampling method.
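As a small illustration of the proportional quota calculation in Example 1, the following Python sketch assumes a hypothetical enrolment table in which only the Grade 10 figure of 150 students is taken from the example above; the other grade totals are invented so that the school sums to 1,000 students:

```python
# Hypothetical enrolment by grade (only the Grade 10 figure of 150 comes from the example)
enrolment = {"Grade 7": 180, "Grade 8": 170, "Grade 9": 160,
             "Grade 10": 150, "Grade 11": 180, "Grade 12": 160}

total_students = sum(enrolment.values())   # 1,000 in this illustration
sample_size = 100

# Quota for each grade = (grade enrolment / total enrolment) x sample size
quotas = {grade: round(n / total_students * sample_size)
          for grade, n in enrolment.items()}

print(quotas)   # e.g. Grade 10 -> 15, matching (150 / 1,000) x 100% = 15%
```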

18. Multidimensional scaling. Multidimensional scaling (MDS) is a set of related statistical techniques often used in information visualization for exploring similarities or dissimilarities in data. MDS is a special case of ordination. An MDS algorithm starts with a matrix of item-item similarities, then assigns a location to each item in N-dimensional space, where N is specified a priori. For sufficiently small N, the resulting locations may be displayed in a graph or 3D visualisation.
Types
MDS algorithms fall into a taxonomy, depending on the meaning of the input matrix:
Classical multidimensional scaling
Also known as Principal Coordinates Analysis, Torgerson Scaling or Torgerson-Gower scaling. Takes an input matrix giving dissimilarities between pairs of items and outputs a coordinate matrix whose configuration minimizes a loss function called strain.[1]
Metric multidimensional scaling
A superset of classical MDS that generalizes the optimization procedure to a variety of loss functions and input matrices of known distances with weights and so on. A useful loss function in this context is called stress, which is often minimized using a procedure called stress majorization.
Non-metric multidimensional scaling
In contrast to metric MDS, non-metric MDS finds both a non-parametric monotonic relationship between the dissimilarities in the item-item matrix and the Euclidean distances between items, and the location of each item in the low-dimensional space. The relationship is typically found using isotonic regression.

Louis Guttman's smallest space analysis (SSA) is an example of a non-metric MDS procedure.

Generalized multidimensional scaling
An extension of metric multidimensional scaling, in which the target space is an arbitrary smooth non-Euclidean space. In cases where the dissimilarities are distances on a surface and the target space is another surface, GMDS allows one to find the minimum-distortion embedding of one surface into another.[2]
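A rough sketch of the classical (Torgerson) variant, assuming Python with numpy; the dissimilarity matrix is invented, and the double-centering and eigendecomposition steps shown are one standard way of computing the coordinates:

```python
import numpy as np

# Hypothetical symmetric dissimilarity matrix for 4 items
D = np.array([[0.0, 2.0, 4.0, 5.0],
              [2.0, 0.0, 3.0, 4.5],
              [4.0, 3.0, 0.0, 2.5],
              [5.0, 4.5, 2.5, 0.0]])

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
B = -0.5 * J @ (D ** 2) @ J                # double-centered matrix

# Coordinates: top-N eigenvectors scaled by the square roots of their eigenvalues
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]          # largest eigenvalues first
N = 2                                      # target dimensionality, specified a priori
coords = eigvecs[:, order[:N]] * np.sqrt(np.maximum(eigvals[order[:N]], 0))
print(coords)                              # one 2-D location per item, ready for plotting
```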

19. Factor analysis. Factor analysis is a statistical method used to describe variability among observed, correlated variables in terms of a potentially lower number of unobserved variables called factors. In other words, it is possible, for example, that variations in three or four observed variables mainly reflect the variations in fewer such unobserved variables. Factor analysis searches for such joint variations in response to unobserved latent variables. The observed variables are modeled as linear combinations of the potential factors, plus "error" terms. The information gained about the interdependencies between observed variables can be used later to reduce the set of variables in a dataset. Computationally this technique is equivalent to low-rank approximation of the matrix of observed variables. Factor analysis originated in psychometrics, and is used in behavioral sciences, social sciences, marketing, product management, operations research, and other applied sciences that deal with large quantities of data. Factor analysis is related to principal component analysis (PCA), but the two are not identical. Latent variable models, including factor analysis, use regression modelling techniques to test hypotheses producing error terms, while PCA is a descriptive statistical technique [1]. There has been significant controversy in the field over the equivalence or otherwise of the two techniques (see exploratory factor analysis versus principal components analysis).
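A brief sketch of how an exploratory factor analysis might be run, assuming scikit-learn is available; the observed variables are simulated from two hypothetical latent factors purely for illustration:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 200 observations of 5 observed variables driven by 2 latent factors plus noise
latent = rng.normal(size=(200, 2))
loadings = np.array([[0.9, 0.0],
                     [0.8, 0.1],
                     [0.1, 0.7],
                     [0.0, 0.9],
                     [0.5, 0.5]])
X = latent @ loadings.T + 0.3 * rng.normal(size=(200, 5))

# Fit a two-factor model; the estimated loadings describe each observed
# variable as a linear combination of the factors, plus an "error" term
fa = FactorAnalysis(n_components=2, random_state=0).fit(X)
print(fa.components_)        # estimated factor loadings (factors x observed variables)
print(fa.noise_variance_)    # estimated error variance for each observed variable
```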

20. Conjoint analysis. Conjoint analysis is a statistical technique used in market research to determine how people value different features that make up an individual product or service. The objective of conjoint analysis is to determine what combination of a limited number of attributes is most influential on respondent choice or decision making. A controlled set of potential products or services is shown to respondents and, by analyzing how they make preferences between these products, the implicit valuation of the individual elements making up the product or service can be determined. These implicit valuations (utilities or part-worths) can be used to create market models that estimate market share, revenue and even profitability of new designs. Conjoint originated in mathematical psychology and was developed by marketing professor Paul Green at the University of Pennsylvania and Data Chan. Other prominent conjoint analysis pioneers include professor V. Seenu Srinivasan of Stanford University who developed a linear programming (LINMAP) procedure for rank ordered data as well as a self-explicated approach, Richard Johnson (founder of Sawtooth Software) who developed the Adaptive Conjoint Analysis technique in the 1980s, and Jordan Louviere (University of Iowa) who invented and developed Choice-based approaches to conjoint analysis and related techniques such as MaxDiff. Today it is used in many of the social sciences and applied sciences including marketing, product management, and operations research. It is used frequently in testing customer acceptance of new product designs, in assessing the appeal of advertisements and in service design. It has been used in product positioning, but there are some who raise problems with this application of conjoint analysis (see disadvantages). Conjoint analysis techniques may also be referred to as multiattribute compositional modelling, discrete choice modelling, or stated preference research, and are part of a broader set of trade-off analysis tools used for systematic analysis of decisions. These tools include Brand-Price Trade-Off, Simalto, and mathematical approaches such as evolutionary algorithms or Rule Developing Experimentation.
Q1. What is "Primary Data" collection? What are the different major areas of measurement? Statistical data as we have seen can be either primary or secondary. Primary data are those which are collected for the first time and so are in crude form. But secondary data are those which have already been collected.

Primary data are always collected from the source. They are collected either by the investigator himself or through his agents. There are different methods of collecting primary data. Each method has its relative merits and demerits. The investigator has to choose a particular method to collect the information. The choice to a large extent depends on the preliminaries to data collection. Some of the commonly used methods are discussed below.

1. Direct Personal observation: This is a very general method of collecting primary data. Here the investigator directly contacts the informants, solicits their cooperation and enumerates the data. The information are collected by direct personal interviews. The novelty of this method is its simplicity. It is neither difficult for the enumerator nor the informants. Because both are present at the spot of data collection. This method provides most accurate information as the investigator collects them personally. But as the investigator alone is involved in the process, his personal bias may influence the accuracy of the data. So it is necessary that the investigator should be honest, unbiased and experienced. In such cases the data collected may be fairly accurate. However, the method is quite costly and time-consuming. So the method should be used when the scope of enquiry is small. 2. Indirect Oral Interviews : This is an indirect method of collecting primary data. Here information are not collected directly from the source but by interviewing persons closely related with the problem. This method is applied to apprehend culprits in case of theft, murder etc. The informations relating to one's personal life or which the informant hesitates to reveal are better collected by this method. Here the investigator prepares 'a small list of questions relating to the enquiry. The answers (information) are collected by interviewing persons well connected with the incident. The investigator should cross-examine the informants to get correct information. This method is time saving and involves relatively less cost. The accuracy of the information largely depends upon the integrity of the investigator. It is desirable that the investigator should be experienced and capable enough to inspire and create confidence in the informant to collect accurate data. 3. Mailed Questionnaire method: This is a very commonly used method of collecting primary data. Here information are collected through a set of questionnaire. A questionnaire is a document prepared by the investigator containing a set of questions. These questions relate to the problem of enquiry directly or indirectly. Here first the questionnaires are mailed to the informants with a formal request to answer the question and send them back. For better response the investigator should bear the postal charges. The questionnaire should carry a polite note explaining the aims and objective of the enquiry, definition of various terms and concepts used there. Besides this the investigator should ensure the secrecy of the information as well as the name of the informants, if required. Success of this method greatly depends upon the way in which the questionnaire is drafted. So the investigator must be very careful while framing the questions. The questions should be (i) Short and clear (ii) Few in number (iii) Simple and intelligible (iv) Corroboratory in nature or there should be provision for cross check (v) Impersonal, non-aggressive type (vi) Simple alternative, multiple-choice or open-end type (a) In the simple alternative question type, the respondent has to choose between alternatives such as Yes or No, right or wrong etc. For example: Is Adam Smith called father of Statistics ? Yes/No, (b) In the multiple choice type, the respondent has to answer from any of the given alternatives. Example: To which sector do you belong ? 
(i) Primary Sector (ii) Secondary Sector (iii) Tertiary or Service Sector (c) In the Open-end or free answer questions the respondents are given complete freedom in answering the questions. The questions are like -

What are the defects of our educational system?

The questionnaire method is very economical in terms of time, energy and money. The method is widely used when the scope of enquiry is large. Data collected by this method are not affected by the personal bias of the investigator. However the accuracy of the information depends on the cooperation and honesty of the informants. This method can be used only if the informants are cooperative, conscious and educated. This limits the scope of the method. 4. Schedule Method: In case the informants are largely uneducated and non-responsive data cannot be collected by the mailed questionnaire method. In such cases, schedule method is used to collect data. Here the questionnaires are sent through the enumerators to collect informations. Enumerators are persons appointed by the investigator for the purpose. They directly meet the informants with the questionnaire. They explain the scope and objective of the enquiry to the informants and solicit their cooperation. The enumerators ask the questions to the informants and record their answers in the questionnaire and compile them. The success of this method depends on the sincerity and efficiency of the enumerators. So the enumerator should be sweet-tempered, good-natured, trained and well-behaved. Schedule method is widely used in extensive studies. It gives fairly correct result as the enumerators directly collect the information. The accuracy of the information depends upon the honesty of the enumerators. They should be unbiased. This method is relatively more costly and time-consuming than the mailed questionnaire method. 5. From Local Agents:

Sometimes primary data are collected from local agents or correspondents. These agents are appointed by the sponsoring authorities. They are well conversant with the local conditions like language, communication, food habits, traditions etc. Being on the spot and well acquainted with the nature of the enquiry they are capable of furnishing reliable information. The accuracy of the data collected by this method depends on the honesty and sincerity of the agents. Because they actually collect the information from the spot. Information from a wide area at less cost and time can be collected by this method. The method is generally used by government agencies, newspapers, periodicals etc. to collect data. Information are like raw materials or inputs in an enquiry. The result of the enquiry basically depends on the type of information used. Primary data can be collected by employing any of the above methods. The investigator should make a rational choice of the methods to be used for collecting data. Because collection of data forms the beginning of the statistical enquiry

Q. 2 What is observation in Primary Data Collection ? Or What are the main merits and demerits of observation methods of collecting information. METHODS OF COLLECTING PRIMARY DATA FOR RESEARCH IN BUSINESS Collecting information is the first step in the business research process, allowing you to know which course of action will improve your company's performance and services. Collecting primary data is one of the best ways to ensure that the information is credible and accurate. In this day and age, conducting primary research is easy with the different mediums that allow businessmen to freely reach to people. Following are the research methods and tips to gather primary data. What you need:

Competent communication skills
Competent observation skills
Desktop or laptop computer with internet access
Printer with colored ink
Telephone or mobile phone for making calls

First of all, you need to decide what kind of information you need to gather. Once you have decided on that, you can use the following research methods to gather such information. Interviews There are three different ways to conduct interviews, and they are:

Face-to-face interviews can be conducted by having question and answer sessions with one or more people. Ask people on the streets, go door-to-door to gather information, or make an appointment with an expert.
Web-based interviews, on the other hand, make use of the internet to gather information, so you will not have to go out into the field for it. This latter method is also less costly and more convenient to use.
Telephone interviews are very much similar to face-to-face interviews, but they are shorter in comparison and more structured. You may also have to send a letter to inform the interviewee in advance so they would expect your call.

Surveys and Questionnaires Both are popular means of gathering data and can reach a large number of people, but they need to be designed and reedited repeatedly to make them acceptable to people. You can either print out copies to hand them out to people or send them to your respondents through email. Though this method is relatively cheap to conduct and requires no prior arrangements, surveys and questionnaires have the risk of low response rates and some may turn out to be incomplete. Focus group interviews and consumer panels Gather a group of people, specifically from your company's target market, and have a facilitator guide them in examining a certain product and asking their opinions on said product. This method is primarily used to determine whether a company's new product or brand name will be acceptable to their target market and to the general public. Observation Observation is one of the simplest methods for primary data research and would not cost much. All you have to do is simply take note of the behavior of people towards your company's products and services. You can also try to observe how your competitors behave, and how they provide their products and services. Make sure that you are not alone in observing and have a number of colleagues to do the same thing so you can differentiate between fact and opinion. Collecting primary data may be difficult and may take a long time to finish, but the end result is that you have the necessary information you can use to make improvements to your company's products and services.
Q. 4 What is a Mail Questionnaire? What are its advantages and limitations? The main advantage of mail questionnaires is that a wide geographical area can be covered in the survey. They are mailed to the respondents who can complete them at their convenience in their homes and at their own pace. However the return rates of mail questionnaires are typically low. A 30% response rate is considered acceptable. Another disadvantage of the mail questionnaire is that any doubts the respondents might have cannot be clarified. Also, with very low return rates it is difficult to establish the representativeness of the sample because those responding to the survey may not at all represent the population they are supposed to. However, some effective techniques can be employed for improving the rates of response to mail questionnaires. Sending follow-up letters, enclosing some small monetary amounts as incentives with the questionnaire, providing the respondent with self-addressed, stamped return envelopes and keeping the questionnaire brief do indeed help. Mail questionnaires are also expected to meet with a better response rate when respondents are notified in advance about the forthcoming survey and a reputed research organization administers them with its own introductory cover letter. The choice of using the questionnaire as a data gathering method might be restricted if the researcher has to reach subjects with very little education.
Q. 8. What is sampling? What are probability sampling and non-probability sampling methods? Explain with examples. How does quota sampling differ from stratified sampling. Why Sample?

When organisations require data they either use data collected by somebody else (secondary data), or collect it themselves (primary data). This is usually done by SAMPLING, that is collecting data from a representative SAMPLE of the population they are interested in. A POPULATION need not be human. In statistics we define a population as the collection of ALL the items about which we want to know some characteristics. Examples of populations are hospital patients, road accidents, pet owners, unoccupied property or bridges. It is usually far too expensive and too time consuming to collect information from every member of the population, exceptions being the General Election and The Census, so instead we collect it from a sample. The population we want to know about is called the TARGET POPULATION, as it is the one we are interested in and targeting. Identifying the target population is not always as easy as it might appear, and once identified there are many practical difficulties. If your target population is cat owners how do you find a list of them? If it is to be of any use the sample must represent the whole of the population we are interested in, and not be biased in any way. This is where the skill in sampling lies: in choosing a sample that will be as representative as possible. As a general rule the larger the sample, the better it is for estimating characteristics of the population. It's easier to estimate the mean height of men by measuring 50 of them rather than just 2. Although information about our sample will be of immediate interest, the point of collecting it is usually to deduce information about the entire population. In statistics this is called making INFERENCES. If such inferences are to be reliable then the sample must be truly representative of the population, i.e. free from bias. The basis for selecting any sample is the list of all the subjects from which the sample is to be chosen - this is the SAMPLING FRAME. Examples are the Postcode Address File, the Electoral register, telephone directories, membership lists, lists created by credit rating agencies and others, and maps. A problem, of course, is that the list may not be up to date. In some cases a list may not even exist. However, in practice one is constrained by TIME and COST.
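A compact sketch of drawing probability samples from a sampling frame, assuming Python's standard library; the frame below is an invented list standing in for, say, an electoral register or a membership list:

```python
import random

random.seed(42)

# Hypothetical sampling frame: 1,000 population members, each tagged with a stratum
frame = [{"id": i, "stratum": "urban" if i % 4 else "rural"} for i in range(1, 1001)]

# Simple random sample of 50 units
srs = random.sample(frame, 50)

# Systematic sample with a random start: every k-th unit, where k = N / n
k = len(frame) // 50
start = random.randrange(k)
systematic = frame[start::k]

# Stratified sample: random selection within each stratum, proportional to its size
strata = {}
for unit in frame:
    strata.setdefault(unit["stratum"], []).append(unit)
stratified = []
for members in strata.values():
    n_stratum = round(50 * len(members) / len(frame))
    stratified.extend(random.sample(members, n_stratum))

print(len(srs), len(systematic), len(stratified))   # 50 50 50
```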

Q.34. What is the purpose of a statistical hypothesis? Discuss null and alternative hypotheses with appropriate examples. A statistical hypothesis is a statement about the way a random variable is distributed - an assumed statement on the probabilistic regularities obeyed by a phenomenon under study. A statistical hypothesis generally specifies the form of the probability distribution or the values of the parameters of the distribution. A hypothesis that completely specifies the distribution is said to be simple. Any nonsimple hypothesis is said to be composite and can be represented as a class of simple hypotheses. An example of a composite hypothesis is the hypothesis that the probability distribution is a normal distribution with mathematical expectation a = a0 and some unknown variance σ². The simple hypotheses here are of the form a = a0 and σ² = σ0², where a0 and σ0² are specified numbers. In research, the purpose of a hypothesis is to make a statement of the expected results of the study being conducted and to make predictions about how two or more constructs will be related. The hypothesis is there to serve as a basis for the study and to pose a question that can be tested in future research. Every experiment actually has two hypotheses: the null hypothesis and the alternative hypothesis. The null hypothesis is the default position that there is no relationship between the two things being studied, and the alternative hypothesis is that there is some relationship between them.
Q.36. What is the unit of analysis in a research process? How is it different from the characteristics of interest?
1. Unit of Analysis One of the most important ideas in a research project is the unit of analysis. The unit of analysis is the major entity that you are analyzing in your study. For instance, any of the following could be a unit of analysis in a study: individuals; groups; artifacts (books, photos, newspapers); geographical units (town, census tract, state); social interactions (dyadic relations, divorces, arrests). Why is it called the 'unit of analysis' and not something else (like, the unit of sampling)? Because it is the analysis you do in your study that determines what the unit is. For instance, if you are comparing the children in two classrooms on achievement test scores, the unit is the individual child because you have a score for each child. On the other hand, if you are comparing the two classes on classroom climate, your unit of analysis is the group, in this case the classroom, because you only have a classroom climate score for the class as a whole and not for each individual student. For different analyses in the same study you may have different units of analysis. If you decide to base an analysis on student scores, the individual is the unit. But you might decide to compare average classroom performance. In this case, since the data that goes into the analysis is the average itself (and not the individuals' scores) the unit of analysis is actually the group. Even though you had data at the student level, you use aggregates in the analysis. In many areas of social research these hierarchies of analysis units have become particularly important and have spawned a whole area of statistical analysis sometimes referred to as hierarchical modeling. This is true in education, for instance, where we often compare classroom performance but collected achievement data at the individual student level.
2. CHARACTERISTICS OF INTEREST Suppose you are the inspector of school buildings and responsible for the children's health.
This classification is to be used in order to evaluate the effect of different building characteristics, such as age of the building, predominant building materials, and type of structural assemblies, on the occurrence of moisture damage. Moisture damage characteristics, such as location of damage, damaged structure type, and presence of mold/mold odor, were analyzed in order to assess their distribution and interrelationships. With these analyses we seek further insights into such building and moisture damage characteristics that may be significant for causes and effects of indoor air pollution related to excessive moisture in school buildings. Indoor air quality of school buildings may be a significant factor for children's health, as

children spend a considerable amount of time in school environments on a daily basis. Excessive moisture in buildings can lead to microbial growth in building constructions and harmful emissions into indoor air. In this example, the unit of analysis is the individual school building, while the characteristics of interest are the attributes measured on each unit, such as the building's age, materials and moisture damage.


Q.38. What are the most commonly used scales for measurement of attitudes in research? Your answer should be supported by an example for each scale.
This chapter will give the reader:
An understanding of the four levels of measurement that can be taken by researchers
The ability to distinguish between comparative and non-comparative measurement scales, and
A basic tool-kit of scales that can be used for the purposes of marketing research.
Structure Of The Chapter
All measurements must take one of four forms and these are described in the opening section of the chapter. After the properties of the four categories of scale have been explained, various forms of comparative and non-comparative scales are illustrated. Some of these scales are numeric, others are semantic and yet others take a graphical form. The marketing researcher who is familiar with the complete tool kit of scaling measurements is better equipped to understand markets.
Levels of measurement
Most texts on marketing research explain the four levels of measurement: nominal, ordinal, interval and ratio, and so the treatment given to them here will be brief. However, it is an important topic since the type of scale used in taking measurements directly impinges on the statistical techniques which can legitimately be used in the analysis.
Nominal scales
This, the crudest of measurement scales, classifies individuals, companies, products, brands or other entities into categories where no order is implied. Indeed it is often referred to as a categorical scale. It is a system of classification and does not place the entity along a continuum. It involves a simple count of the frequency of the cases assigned to the various categories, and if desired numbers can be nominally assigned to label each category as in the example below:
Figure 3.1 An example of a nominal scale
Which of the following food items do you tend to buy at least once per month? (Please tick)
Okra
Palm Oil
Milled Rice
Peppers
Prawns
Pasteurised milk
The numbers have no arithmetic properties and act only as labels. The only measure of average which can be used is the mode because this is simply a set of frequency counts. Hypothesis tests can be carried out on data collected in the nominal form. The most likely would be the Chi-square test. However, it should be noted that the Chi-square is a test to determine whether two or more variables are associated and the strength of that relationship. It can tell nothing about the form of that relationship, where it exists, i.e. it is not capable of establishing cause and effect.
Ordinal scales
Ordinal scales involve the ranking of individuals, attitudes or items along the continuum of the characteristic being scaled. For example, if a researcher asked farmers to rank 5 brands of pesticide in order of preference he/she might obtain responses like those in figure 3.2 below.

Figure 3.2 An example of an ordinal scale used to determine farmers' preferences among 5 brands of pesticide.

Order of preference   Brand
1                     Rambo
2                     R.I.P.
3                     Killalot
4                     D.O.A.
5                     Bugdeath

From such a table the researcher knows the order of preference but nothing about how much more one brand is preferred to another, that is there is no information about the interval between any two brands. All of the information a nominal scale would have given is available from an ordinal scale. In addition, positional statistics such as the median, quartile and percentile can be determined. It is possible to test for order correlation with ranked data. The two main methods are Spearman's Ranked Correlation Coefficient and Kendall's Coefficient of Concordance. Using either procedure one can, for example, ascertain the degree to which two or more survey respondents agree in their ranking of a set of items. Consider again the ranking of pesticides example in figure 3.2. The researcher might wish to measure similarities and differences in the rankings of pesticide brands according to whether the respondents' farm enterprises were classified as "arable" or "mixed" (a combination of crops and livestock). The resultant coefficient takes a value in the range 0 to 1. A zero would mean that there was no agreement between the two groups, and 1 would indicate total agreement. It is more likely that an answer somewhere between these two extremes would be found. The only other permissible hypothesis testing procedures are the runs test and sign test. The runs test (also known as the Wald-Wolfowitz test) is used to determine whether a sequence of binomial data - meaning it can take only one of two possible values e.g. African/non-African, yes/no, male/female - is random or contains systematic 'runs' of one or other value. Sign tests are employed when the objective is to determine whether there is a significant difference between matched pairs of data. The sign test tells the analyst if the number of positive differences in ranking is approximately equal to the number of negative rankings, in which case the distribution of rankings is random, i.e. apparent differences are not significant. The test takes into account only the direction of differences and ignores their magnitude and hence it is compatible with ordinal data.
Interval scales
It is only with interval-scaled data that researchers can justify the use of the arithmetic mean as the measure of average. The interval or cardinal scale has equal units of measurement, thus making it possible to interpret not only the order of scale scores but also the distance between them. However, it must be recognised that the zero point on an interval scale is arbitrary and is not a true zero. This of course has implications for the type of data manipulation and analysis we can carry out on data collected in this form. It is possible to add or subtract a constant to all of the scale values without affecting the form of the scale but one cannot multiply or divide the values. It can be said that two respondents with scale positions 1 and 2 are as far apart as two respondents with scale positions 4 and 5, but not that a person with score 10 feels twice as strongly as one with score 5. Temperature is interval scaled, being measured either in Centigrade or Fahrenheit. We cannot speak of 50°F being twice as hot as 25°F since the corresponding temperatures on the centigrade scale, 10°C and -3.9°C, are not in the ratio 2:1. Interval scales may be either numeric or semantic. Study the examples below in figure 3.3.
Figure 3.3 Examples of interval scales in numeric and semantic formats

(a) Please indicate your views on Balkan Olives by scoring them on a scale of 5 down to 1 (i.e. 5 = Excellent; 1 = Poor) on each of the criteria listed. Balkan Olives are: (circle the appropriate score on each line)
Succulence              5 4 3 2 1
Fresh tasting           5 4 3 2 1
Free of skin blemish    5 4 3 2 1
Good value              5 4 3 2 1
Attractively packaged   5 4 3 2 1

(b) Please indicate your views on Balkan Olives by ticking the appropriate responses below:
                              Excellent   Very Good   Good   Fair   Poor
Succulent
Freshness
Freedom from skin blemish
Value for money
Attractiveness of packaging

Most of the common statistical methods of analysis require only interval scales in order that they might be used. These are not recounted here because they are so common and can be found in virtually all basic texts on statistics.
Ratio scales
The highest level of measurement is a ratio scale. This has the properties of an interval scale together with a fixed origin or zero point. Examples of variables which are ratio scaled include weights, lengths and times. Ratio scales permit the researcher to

compare both differences in scores and the relative magnitude of scores. For instance the difference between 5 and 10 minutes is the same as that between 10 and 15 minutes, and 10 minutes is twice as long as 5 minutes. Given that sociological and management research seldom aspires beyond the interval level of measurement, it is not proposed that particular attention be given to this level of analysis. Suffice it to say that virtually all statistical operations can be performed on ratio scales.
Measurement scales
The various types of scales used in marketing research fall into two broad categories: comparative and non-comparative. In comparative scaling, the respondent is asked to compare one brand or product against another. With non-comparative scaling respondents need only evaluate a single product or brand. Their evaluation is independent of the other products and/or brands which the marketing researcher is studying. Non-comparative scaling is frequently referred to as monadic scaling and this is the more widely used type of scale in commercial marketing research studies.
Q.41. Computer graphics will have an impact on the research report writing format. Elaborate this in the light of computer applications in report writing. The term computer graphics includes almost everything on computers that is not text or sound. Today almost every computer can do some graphics, and people have even come to expect to control their computer through icons and pictures rather than just by typing. Here in our lab at the Program of Computer Graphics, we think of computer graphics as drawing pictures on computers, also called rendering. The pictures can be photographs, drawings, movies, or simulations -- pictures of things which do not yet exist and maybe could never exist. Or they may be pictures from places we cannot see directly, such as medical images from inside your body. We spend much of our time improving the way computer pictures can simulate real world scenes. We want images on computers to not just look more realistic, but also to BE more realistic in their colors, the way objects and rooms are lighted, and the way different materials appear. We call this work "realistic image synthesis", and the following series of pictures will show some of our techniques in stages from very simple pictures through very realistic ones.
