
WRITING A LITERATURE REVIEW

What is a literature review?
Why do a literature review?
How many texts?
Writing the review
Annotated bibliography

WHAT IS A LITERATURE REVIEW?


A literature review is a description of the literature relevant to a particular field or topic. This is often written as part of a postgraduate thesis proposal, or at the commencement of a thesis. A critical literature review is a critical assessment of the relevant literature. It is unlikely that you will be able to write a truly critical assessment of the literature until you have a good grasp of the subject, usually at some point near the end of your thesis.

How does a literature review differ from other assignments?


The review, like other forms of expository writing, has an introduction, body and conclusion, well-formed paragraphs, and a logical structure. However, in other kinds of expository writing, you use relevant literature to support the discussion of your thesis; in a literature review, the literature itself is the subject of discussion.

What counts as 'literature'?


Literature covers everything relevant that is written on a topic: books, journal articles, newspaper articles, historical records, government reports, theses and dissertations, etc. The important word is 'relevant'. Check with your supervisor or tutor when in doubt.

WHY DO A LITERATURE REVIEW?


A literature review gives an overview of the field of inquiry: what has already been said on the topic, who the key writers are, what the prevailing theories and hypotheses are, what questions are being asked, and what methodologies and methods are appropriate and useful. A critical literature review shows how prevailing ideas fit into your own thesis, and how your thesis agrees or differs from them.

HOW MANY REFERENCES TO LOOK FOR?


This depends on what the literature review is for, and what stage you are at in your studies. Your supervisor or tutor should specify a minimum number of references. Generally speaking, a reasonable number of references in a literature review would be:

Undergraduate review: 5-20 titles, depending on level.
Honours dissertation: 20+ titles.
Masters thesis: 40+ titles.
Doctoral thesis: 50+ titles.

HOW TO WRITE A LITERATURE REVIEW


1. The literature search
Find out what has been written on your subject. Use as many bibliographical sources as you can to find relevant titles. The following are likely sources:

Bibliographies and references in key textbooks and recent journal articles. Your supervisor or tutor should tell you which are the key texts and relevant journals.
Abstracting databases, such as PsycINFO, Medline, etc.
Citation databases, such as Web of Science and Scopus. Many abstracting journals and electronic databases are available through the University Library's Research Gateway.
A useful reference book for information searches: Lane, Nancy D 1996. Techniques for Student Research: A Practical Guide. Second edition. Melbourne: Longman (UC library call number Z 711.2 L36).
Using the specialist librarians: the University Library has three specialist librarians, one for each Faculty. They can help you decide which databases and bibliographies are relevant to your field, and can advise you on other sources for your literature search. Use them!

2. Noting the bibliographical details


Write down the full bibliographical details of each book or article as soon as you find a reference to it. This will save you an enormous amount of time later on.

3. Finding the literature


Once you have what looks like a list of relevant texts, you have to find them.

Use the UC library catalogue to see if the books and journals are held at UC. For ejournals, look at the A-Z listing.
For books and journals, you can use the UC library pages to search other Canberra library catalogues (including the National Library).
For journals, use the Libraries Australia catalogue (http://librariesaustralia.nla.gov.au/) to see which libraries in Australia (including government department libraries and other specialist libraries) hold the journals you are looking for.
If the book or journal you want is not held in Canberra, you may be able to access it through interlibrary loans. Check with your supervisor to see if this facility is available to you. (Someone has to pay for inter-library loans!)
The full text of many journal articles can be found on electronic databases such as Business Source Complete, IEEE Xplore and ScienceDirect.

4. Reading the literature


Before you begin to read a book or article, make sure you have written down the full bibliographical details (see note 2 above).

Take notes as you read the literature. You are reading to find out how each piece of writing approaches the subject of your research, what it has to say about it, and (especially for research students) how it relates to your own thesis:

Is it a general textbook or does it deal with a specific issue or issues?
Is it an empirical report, a theoretical study, a sociological or political account, a historical overview, etc.? All or some of these?
Does it follow a particular school of thought? What is its theoretical basis?
What definitions does it use?
What is its general methodological approach? What methods are used?
What kinds of data does it use to back up its argument?
What conclusions does it come to?

Other questions may be relevant, depending on the purpose of the review. Usually, you won't have to read the whole text from first to last page. Learn to use efficient scanning and skimming reading techniques.

5. Writing the review


Having gathered the relevant details about the literature, you now need to write the review. The kind of review you write, and the amount of detail, will depend on the level of your studies.

Important note: do not confuse a literature review with an annotated bibliography. An annotated bibliography deals with each text in turn, describing and evaluating it in one paragraph per text. In contrast, a literature review synthesises many texts in one paragraph. Each paragraph (or section, if it is a long thesis) of the literature review should classify and evaluate the themes of the texts that are relevant to your thesis; each paragraph or section of your review should deal with a different aspect of the literature.

Like all academic writing, a literature review must have an introduction, body, and conclusion.

The introduction should include:
the nature of the topic under discussion (the topic of your thesis);
the parameters of the topic (what does it include and exclude?);
the basis for your selection of the literature.

The conclusion should include:
a summary of major agreements and disagreements in the literature;
a summary of general conclusions that are being drawn;
a summary of where your thesis sits in the literature. (Remember! Your thesis could become one of the future texts on the subject: how will later research students describe your thesis in their literature reviews?)

The body paragraphs could include relevant paragraphs on:
historical background, including classic texts;
current mainstream versus alternative theoretical or ideological viewpoints, including differing theoretical assumptions, differing political outlooks, and other conflicts;
possible approaches to the subject (empirical, philosophical, historical, postmodernist, etc.);
definitions in use;
current research studies;
current discoveries about the topic;
principal questions that are being asked;
general conclusions that are being drawn;
methodologies and methods in use;
and so on.

WHAT IS A LITERATURE REVIEW?


Many students are instructed, as part of their research program, to perform a literature review, without always understanding what a literature review is.
by Martyn Shuttleworth (2009)

Most are aware that it is a process of gathering information from other sources and documenting it, but few have any idea of how to evaluate the information, or how to present it. A literature review can form part of the introduction to a research paper, or it can be an entire paper in itself, often the first stage of a large research project, allowing the supervisor to ascertain that the student is on the correct path. A literature review is a critical and in-depth evaluation of previous research. It is a summary and synopsis of a particular area of research, allowing anybody reading the paper to establish why you are pursuing this particular research program. A good literature review expands upon the reasons behind selecting a particular research question.

WHAT IS A LITERATURE REVIEW NOT?


It is not a chronological catalog of all of the sources, but an evaluation, integrating the previous research together and explaining how it integrates into the proposed research program. All sides of an argument must be clearly explained, to avoid bias, and areas of agreement and disagreement should be highlighted.

It is not a collection of quotes and paraphrasing from other sources. A good literature review should also have some evaluation of the quality and findings of the research.

A good literature review should avoid the temptation of stressing the importance of a particular research program. The fact that a researcher is undertaking the research program speaks for its importance, and an educated reader may well be insulted that they are not allowed to judge the importance for themselves. They want to be reassured that it is a serious paper, not a pseudo-scientific sales advertisement.

Whilst some literature reviews can be presented in chronological order, this is best avoided unless the history is the point: a review of Victorian Age Physics, for example, could reasonably present J.J. Thomson's famous experiments in chronological order. Otherwise, a chronological arrangement is usually perceived as being a little lazy, and it is better to organize the review around ideas and individual points. As a general rule, certainly for a longer review, each paragraph should address one point, and present and evaluate all of the evidence, from all of the differing points of view.

CONDUCTING A LITERATURE REVIEW


Evaluating the credibility of sources is one of the most difficult aspects, especially with the ease of finding information on the internet. The only real way to evaluate is through experience, but there are a few tricks for evaluating information quickly, yet accurately. There is such a thing as too much information, and Google does not distinguish or judge the quality of results, only how search-engine friendly a paper is. This is why it is still good practice to begin research in an academic library: any journals found there can be regarded as safe and credible. The next stage is to use the internet, and this is where the difficulties start. It is very difficult to judge the credibility of an online paper. The main thing is to structure the internet research as if it were on paper. Bookmark papers that may be relevant in one folder, and make another subfolder for a shortlist.

The easiest way is to scan the work, using the abstract and introduction as guides. This helps to eliminate the non-relevant work and also some of the lower quality research. If a paper sets off alarm bells, there may be something wrong, and it is probably of low quality. Be very careful, however, not to fall into the trap of rejecting research just because it conflicts with your hypothesis; doing so will completely invalidate the literature review and potentially undermine the research project. Any research that may be relevant should be moved to the shortlist folder.

The next stage is to critically evaluate the paper and decide whether the research is of sufficient quality. The temptation is to try to include as many sources as possible, because it is easy to fall into the trap of thinking that a long bibliography equates to a good paper. A smaller number of quality sources is far preferable to a long list of irrelevance.

Check into the credentials of any source upon which you rely heavily for the literature review. The reputation of the University or organization is a factor, as is the experience of the researcher. If their name keeps cropping up, and they have written many papers, the source is usually OK.

Look for agreements. Good research should have been replicated by other independent researchers, with similar results, showing that the information is usually fairly safe to use. If the process is proving difficult (and in some fields, like medicine and environmental research, there is a lot of poor science), do not be afraid to ask a supervisor for a few tips. They should know some good and reputable sources to look at. It may be a little extra work for them, but there will be even more work if they have to tear apart a review because it is built upon shaky evidence.

Conducting a good literature review is a matter of experience, and even the best scientists have fallen into the trap of using poor evidence. This is not a problem, and is part of the scientific process; if a research program is well constructed, it will not affect the results.

Read more: http://www.experiment-resources.com/what-is-a-literature-review.html

EXPERIMENTAL RESEARCH
Experimental research is commonly used in sciences such as sociology, psychology, physics, chemistry, biology and medicine.
by Experiment-Resources.com (2008)

It is a collection of research designs which use manipulation and controlled testing to understand causal processes. Generally, one or more variables are manipulated to determine their effect on a dependent variable. The experimental method is a systematic and scientific approach to research in which the researcher manipulates one or more variables, and controls and measures any change in other variables.

Experimental Research is often used where:
1. There is time priority in a causal relationship (cause precedes effect).
2. There is consistency in a causal relationship (a cause will always lead to the same effect).
3. The magnitude of the correlation is great.

(Reference: en.wikipedia.org)
The term 'experimental research' covers a range of definitions. In the strict sense, experimental research is what we call a true experiment. This is an experiment where the researcher manipulates one variable, and controls or randomizes the rest of the variables. It has a control group, the subjects have been randomly assigned between the groups, and the researcher only tests one effect at a time. It is also important to know what variable(s) you want to test and measure.

A very wide definition of experimental research, or a quasi-experiment, is research where the scientist actively influences something to observe the consequences. Most experiments tend to fall between the strict and the wide definition. A rule of thumb is that physical sciences, such as physics, chemistry and geology, tend to define experiments more narrowly than social sciences, such as sociology and psychology, which conduct experiments closer to the wider definition.
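To make the random assignment a true experiment requires more concrete, here is a minimal Python sketch; the subject labels and group sizes are invented for illustration, not taken from the text above.

    # Hypothetical example: randomly assign 20 subjects to a control
    # group and an experimental group, as a true experiment requires.
    import random

    subjects = [f"subject_{i}" for i in range(1, 21)]   # invented participants

    random.shuffle(subjects)                  # randomize the order
    midpoint = len(subjects) // 2
    control_group = subjects[:midpoint]       # receives no manipulation
    experimental_group = subjects[midpoint:]  # receives the manipulated variable

    print(control_group)
    print(experimental_group)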

AIMS OF EXPERIMENTAL RESEARCH


Experiments are conducted to be able to predict phenomena. Typically, an experiment is constructed to be able to explain some kind of causation. Experimental research is important to society - it helps us to improve our everyday lives.

IDENTIFYING THE RESEARCH PROBLEM


After deciding on the topic of interest, the researcher tries to define the research problem. This helps the researcher to focus on a narrower research area, to be able to study it appropriately. The research problem is often operationalized, to define how to measure it. The results will depend on the exact measurements that the researcher chooses, and the problem may be operationalized differently in another study to test the main conclusions of the study.

Defining the research problem helps you to formulate a research hypothesis, which is tested against the null hypothesis. Conceptual variables are often expressed in general, theoretical, qualitative, or subjective terms and are important in the hypothesis-building process.

An ad hoc analysis is a hypothesis invented after testing is done, to try to explain contrary evidence. A poor ad hoc analysis may be seen as the researcher's inability to accept that his or her hypothesis is wrong, while a great ad hoc analysis may lead to more testing and possibly a significant discovery.

CONSTRUCTING THE EXPERIMENT


There are various aspects to remember when constructing an experiment. Planning ahead ensures that the experiment is carried out properly and that the results reflect the real world, in the best possible way.

SAMPLING GROUPS TO STUDY


Sampling groups correctly is especially important when we have more than one condition in the experiment. One sample group often serves as a control group, whilst others are tested under the experimental conditions. Deciding the sample groups can be done using many different sampling techniques. Population sampling may be chosen by a number of methods, such as randomization, "quasi-randomization" and pairing. Reducing sampling errors is vital for getting valid results from experiments, and researchers often adjust the sample size to minimize the chance of random errors. Here are some common sampling techniques (a short sketch of two of them follows the list):

probability sampling
non-probability sampling
simple random sampling
convenience sampling
stratified sampling
systematic sampling
cluster sampling
sequential sampling
disproportional sampling
judgmental sampling
snowball sampling
quota sampling
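As a brief illustration of two of the techniques above, the following Python sketch draws a simple random sample and a proportionally stratified sample from an invented population; the strata (faculties) and sample sizes are assumptions made only for this example.

    # Hypothetical population of 300 people, each belonging to one stratum.
    import random

    population = [{"id": i, "faculty": random.choice(["Arts", "Science", "Business"])}
                  for i in range(1, 301)]

    # Simple random sampling: every member has an equal chance of selection.
    simple_sample = random.sample(population, 30)

    # Stratified sampling: sample proportionally from each stratum.
    strata = {}
    for person in population:
        strata.setdefault(person["faculty"], []).append(person)

    stratified_sample = []
    for faculty, members in strata.items():
        k = round(30 * len(members) / len(population))   # proportional allocation
        stratified_sample.extend(random.sample(members, k))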

CREATING THE DESIGN


The research design is chosen based on a range of factors. Important factors when choosing the design are feasibility, time, cost, ethics, measurement problems and what you would like to test. The design of the experiment is critical for the validity of the results.

TYPICAL DESIGNS AND FEATURES IN EXPERIMENTAL DESIGN

Pretest-Posttest Design: Check whether the groups are different before the manipulation starts, and measure the effect of the manipulation. Pretests sometimes influence the effect.

Control Group: Control groups are designed to measure research bias and measurement effects, such as the Hawthorne Effect or the Placebo Effect. A control group is a group not receiving the same manipulation as the experimental group. Experiments frequently have 2 conditions, but rarely more than 3 conditions at the same time.

Randomized Controlled Trials: Randomized sampling, comparison between an Experimental Group and a Control Group, and strict control/randomization of all other variables.

Solomon Four-Group Design: With two control groups and two experimental groups. Half the groups have a pretest and half do not. This is to test both the effect itself and the effect of the pretest.

Between Subjects Design: Grouping participants to different conditions.

Within Subject Design: Participants take part in the different conditions - see also: Repeated Measures Design.

Counterbalanced Measures Design: Testing the effect of the order of treatments when no control group is available/ethical.

Matched Subjects Design: Matching participants to create similar experimental and control groups.

Double-Blind Experiment: Neither the researcher, nor the participants, know which is the control group. The results can be affected if the researcher or participants know this.

Bayesian Probability: Using Bayesian probability to "interact" with participants is a more "advanced" experimental design. It can be used for settings where there are many variables which are hard to isolate. The researcher starts with a set of initial beliefs, and tries to adjust them to how participants have responded.
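As a small illustration of the Counterbalanced Measures Design listed above, the following Python sketch generates every possible order of three hypothetical treatments and cycles participants through them so that order effects are spread evenly. The treatment and participant names are invented for the example.

    # Counterbalancing sketch: assign each participant one of the possible
    # treatment orders so no single order dominates.
    from itertools import permutations

    treatments = ["A", "B", "C"]
    orders = list(permutations(treatments))   # 6 possible orders for 3 treatments

    participants = [f"p{i}" for i in range(1, 13)]
    assignment = {p: orders[i % len(orders)] for i, p in enumerate(participants)}

    for participant, order in assignment.items():
        print(participant, order)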

PILOT STUDY
It may be wise to first conduct a pilot-study or two before you do the real experiment. This ensures that the experiment measures what it should, and that everything is set up right.

Minor errors, which could potentially destroy the experiment, are often found during this process. With a pilot study, you can get information about errors and problems, and improve the design, before putting a lot of effort into the real experiment. If the experiments involve humans, a common strategy is to first have a pilot study with someone involved in the research, but not too closely, and then arrange a pilot with a person who resembles the subject(s). Those two different pilots are likely to give the researcher good information about any problems in the experiment.

CONDUCTING THE EXPERIMENT


An experiment is typically carried out by manipulating a variable, called the independent variable, affecting the experimental group. The effect that the researcher is interested in, the dependent variable(s), is measured. Identifying and controlling non-experimental factors which the researcher does not want to influence the effects is crucial to drawing a valid conclusion. This is often done by controlling variables, if possible, or randomizing variables to minimize effects that can be traced back to third variables. Researchers only want to measure the effect of the independent variable(s) when conducting an experiment, allowing them to conclude that this was the reason for the effect.

ANALYSIS AND CONCLUSIONS


In quantitative research, the amount of data measured can be enormous. Data not yet prepared for analysis is called "raw data". The raw data is often summarized as something called "output data", which typically consists of one line per subject (or item). A cell of the output data is, for example, an average of an effect over many trials for a subject. The output data is used for statistical analysis, e.g. significance tests, to see if there really is an effect.

The aim of an analysis is to draw a conclusion, together with other observations. The researcher might generalize the results to a wider phenomenon if there is no indication of confounding variables "polluting" the results. If the researcher suspects that the effect stems from a different variable than the independent variable, further investigation is needed to gauge the validity of the results. An experiment is often conducted because the scientist wants to know if the independent variable is having any effect upon the dependent variable. Correlation between variables is not proof of causation. Experiments are more often quantitative than qualitative in nature, although qualitative experiments do occur.
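The step from raw data to output data can be illustrated with a short Python sketch: one line per subject is produced by averaging that subject's trials. The reaction-time figures and subject labels below are invented for the example; a significance test (discussed later in the statistics tutorial) would then be run on the summarized values.

    # Hypothetical raw data: one row per trial (subject, condition, reaction time in ms).
    from statistics import mean

    raw_data = [
        ("s1", "control", 412), ("s1", "control", 398), ("s1", "control", 405),
        ("s2", "experimental", 377), ("s2", "experimental", 369),
        ("s3", "control", 430), ("s3", "control", 421),
        ("s4", "experimental", 390), ("s4", "experimental", 385),
    ]

    # Group the trials by subject and condition.
    trials_by_subject = {}
    for subject, condition, value in raw_data:
        trials_by_subject.setdefault((subject, condition), []).append(value)

    # Output data: one line per subject, the average over that subject's trials.
    for (subject, condition), values in trials_by_subject.items():
        print(subject, condition, round(mean(values), 1))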

EXAMPLES OF EXPERIMENTS
This website contains many examples of experiments. Some are not true experiments, but involve some kind of manipulation to investigate a phenomenon. Others fulfil most or all criteria of true experiments. Here are some examples of scientific experiments:

SOCIAL PSYCHOLOGY

Stanley Milgram Experiment - Will people obey orders, even if clearly dangerous?
Asch Experiment - Will people conform to group behavior?
Stanford Prison Experiment - How do people react to roles? Will you behave differently?
Good Samaritan Experiment - Would You Help a Stranger? - Explaining Helping Behavior

GENETICS

Law Of Segregation - The Mendel Pea Plant Experiment
Transforming Principle - Griffith's Experiment about Genetics

PHYSICS

Ben Franklin Kite Experiment - Struck by Lightning
J J Thomson Cathode Ray Experiment

Read more: http://www.experiment-resources.com/experimental-research.html

RESEARCH DESIGNS

Different types of research designs have different advantages and disadvantages.

by Experiment-Resources.com (2008)

The design is the structure of any scientific work. It gives direction and systematizes the research. The method you choose will affect your results and how you conclude the findings. Most scientists are interested in getting reliable observations that can help the understanding of a phenomenon. There are two main approaches to a research problem:

Quantitative Research
Qualitative Research

What is the difference between Qualitative and Quantitative Research?

DIFFERENT RESEARCH METHODS


There are various designs which are used in research, all with specific advantages and disadvantages. Which one the scientist uses depends on the aims of the study and the nature of the phenomenon:

Descriptive Designs
Aim: Observe and Describe

Descriptive Research
Case Study
Naturalistic Observation
Survey (the Questionnaire is also a technique used in many types of research designs)

Correlational Studies
Aim: Predict

Case Control Study
Observational Study
Cohort Study
Longitudinal Study
Cross Sectional Study
Correlational Studies in general

Semi-Experimental Designs

Aim: Determine Causes


Field Experiment
Quasi-Experimental Design
Twin Studies

Experimental Designs
Aim: Determine Causes

True Experimental Design
Double-Blind Experiment

Reviewing Other Research


Aim: Explain

Literature Review
Meta-analysis
Systematic Reviews

Test Study Before Conducting a Full-Scale Study


Aim: Does the Design Work?

Pilot Study

TYPICAL EXPERIMENTAL DESIGNS


SIMPLE EXPERIMENTAL TECHNIQUES

Pretest-Posttest Design
Control Group
Randomization
Randomized Controlled Trials
Between Subjects Design
Within Subject Design

COMPLEX EXPERIMENTAL DESIGNS


Factorial Design
Solomon Four-Group Design
Repeated Measures Design
Counterbalanced Measures Design
Matched Subjects Design
Bayesian Probability

WHICH METHOD TO CHOOSE?


What design you choose depends on different factors.

What information do you want?
The aims of the study.
The nature of the phenomenon - is it feasible to collect the data, and if so, would it be valid/reliable?
How reliable should the information be?
Is it ethical to conduct the study?
The cost of the design.
Is there little or much current scientific theory and literature on the topic?

FURTHER READING

"Research Design: Qualitative, Quantitative, and Mixed Methods Approaches" by John W. Creswell "Essentials of Research Design and Methodology" by Geoffrey R Marczyk

Read more: http://www.experiment-resources.com/research-designs.html

QUANTITATIVE RESEARCH DESIGN


Quantitative research design is the standard experimental method of most scientific disciplines.
by Martyn Shuttleworth (2008)

These experiments are sometimes referred to as true science, and use traditional mathematical and statistical means to measure results conclusively. They are most commonly used by physical scientists, although social sciences, education and economics have been known to use this type of research. It is the opposite of qualitative research.

Quantitative experiments all use a standard format, with a few minor inter-disciplinary differences, of generating a hypothesis to be proved or disproved. This hypothesis must be provable by mathematical and statistical means, and is the basis around which the whole experiment is designed. Randomization of any study groups is essential, and a control group should be included wherever possible. A sound quantitative design should only manipulate one variable at a time, or statistical analysis becomes cumbersome and open to question. Ideally, the research should be constructed in a manner that allows others to repeat the experiment and obtain similar results.

ADVANTAGES
Quantitative research design is an excellent way of finalizing results and proving or disproving a hypothesis. The structure has not changed for centuries, so it is standard across many scientific fields and disciplines. After statistical analysis of the results, a comprehensive answer is reached, and the results can be legitimately discussed and published. Quantitative experiments also filter out external factors, if properly designed, and so the results gained can be seen as real and unbiased. Quantitative experiments are useful for testing the results gained by a series of qualitative experiments, leading to a final answer, and a narrowing down of possible directions for follow-up research to take.

DISADVANTAGES
Quantitative experiments can be difficult and expensive and require a lot of time to perform. They must be carefully planned to ensure that there is complete randomization and correct designation of control groups. Quantitative studies usually require extensive statistical analysis, which can be difficult for scientists who are not statisticians; the field of statistics is a whole scientific discipline in itself and can be difficult for non-mathematicians. In addition, the requirements for the successful statistical confirmation of results are very stringent, with very few experiments comprehensively proving a hypothesis; there is usually some ambiguity, which requires retesting and refinement of the design. This means another investment of time and resources must be committed to fine-tune the results.

Quantitative research design also tends to generate only proven or unproven results, with very little room for grey areas and uncertainty. For the social sciences, education, anthropology and psychology, human nature is a lot more complex than just a simple yes or no response.

Read more: http://www.experiment-resources.com/quantitative-research-design.html

QUALITATIVE RESEARCH DESIGN


Qualitative research design is a research method used extensively by scientists and researchers studying human behavior and habits.
by Martyn Shuttleworth (2008)

It is also very useful for product designers who want to make a product that will sell. For example, a designer generating some ideas for a new product might want to study people's habits and preferences, to make sure that the product is commercially viable. Quantitative research is then used to assess whether the completed design is popular or not. Qualitative research is often regarded as a precursor to quantitative research, in that it is often used to generate possible leads and ideas which can be used to formulate a realistic and testable hypothesis. This hypothesis can then be comprehensively tested and mathematically analyzed with standard quantitative research methods. For these reasons, these qualitative methods are often closely allied with interviews, survey design techniques and individual case studies, as a way to reinforce and evaluate findings over a broader scale. In the product example above, a qualitative study completed before the quantitative experiment would reveal which of the multitude of brands were the most popular; the quantitative experiment could then be constructed around only these brands, saving a lot of time, money and resources. Qualitative methods are probably the oldest of all scientific techniques, with Ancient Greek philosophers qualitatively observing the world around them and trying to come up with answers which explained what they saw.

DESIGN

The design of qualitative research is probably the most flexible of the various experimental techniques, encompassing a variety of accepted methods and structures. From an individual case study to an extensive interview, this type of study still needs to be carefully constructed and designed, but there is no standardized structure. Case studies, interviews and survey designs are the most commonly used methods.

ADVANTAGES
Qualitative techniques are extremely useful when a subject is too complex to be answered by a simple yes or no hypothesis. These types of designs are much easier to plan and carry out. They are also useful when budgetary decisions have to be taken into account. The broader scope covered by these designs ensures that some useful data is always generated, whereas an unproved hypothesis in a quantitative experiment can mean that a lot of time has been wasted. Qualitative research methods are not as dependent upon sample sizes as quantitative methods; a case study, for example, can generate meaningful results with a small sample group.

DISADVANTAGES
Whilst not as time- or resource-consuming as quantitative experiments, qualitative methods still require a lot of careful thought and planning, to ensure that the results obtained are as accurate as possible. Qualitative data cannot be mathematically analyzed in the same comprehensive way as quantitative results, so it can only give a guide to general trends. It is a lot more open to personal opinion and judgment, and so can only ever give observations rather than results. Any qualitative research design is usually unique and cannot be exactly recreated, meaning that it lacks the ability to be replicated.

Read more: http://www.experiment-resources.com/qualitative-research-design.html

RESEARCH HYPOTHESIS

A research hypothesis is the statement created by researchers when they speculate upon the outcome of a research project or experiment.
by Martyn Shuttleworth (2008)

Every true experimental design must have this statement at the core of its structure, as the ultimate aim of any experiment. The hypothesis is generated via a number of means, but is usually the result of a process of inductive reasoning where observations lead to the formation of a theory. Scientists then use a large battery of deductive methods to arrive at a hypothesis that is testable, falsifiable and realistic.

The precursor to a hypothesis is a research problem, usually framed as a question. It might ask what, or why, something is happening. For example, to use a topical subject, we might wonder why the stocks of cod in the North Atlantic are declining. The problem question might be 'Why are the numbers of cod in the North Atlantic declining?' This is too broad as a statement and is not testable by any reasonable scientific means. It is merely a tentative question arising from literature reviews and intuition. Many people would think that instinct and intuition are unscientific, but many of the greatest scientific leaps were the result of hunches. The research hypothesis is a paring down of the problem into something testable and falsifiable. In the above example, a researcher might speculate that the decline in the fish stocks is due to prolonged over-fishing. Scientists must generate a realistic and testable hypothesis around which they can build the experiment.

This might be a question, a statement or an If/Or statement. Some examples could be:

Is over-fishing causing a decline in the stocks of cod in the North Atlantic?
Over-fishing affects the stocks of cod.
If over-fishing is causing a decline in the numbers of cod, reducing the number of trawlers will increase cod stocks.

These are all acceptable statements and they all give the researcher a focus for constructing a research experiment. Science tends to formalize things and use the 'If' statement, measuring the effect that manipulating one variable has upon another, but the other forms are perfectly acceptable. An ideal research hypothesis should contain a prediction, which is why the more formal ones are favored.

A hypothesis must be testable, but must also be falsifiable for its acceptance as true science. A scientist who becomes fixated on proving a research hypothesis loses their impartiality and credibility. Statistical tests often uncover trends, but rarely give a clear-cut answer, with other factors often affecting the outcome and influencing the results. Whilst gut instinct and logic tell us that fish stocks are affected by over-fishing, it is not necessarily true, and the researcher must consider that outcome. Perhaps environmental factors or pollution are causal effects influencing fish stocks.

A hypothesis must be testable, taking into account current knowledge and techniques, and be realistic. If the researcher does not have a multi-million dollar budget, then there is no point in generating complicated hypotheses. A hypothesis must be verifiable by statistical and analytical means, to allow a verification or falsification. In fact, a hypothesis is never proved, and it is better practice to use the terms 'supported' or 'verified'. This means that the research showed that the evidence supported the hypothesis, and further research is built upon that. A research hypothesis which stands the test of time eventually becomes a theory, such as Einstein's General Relativity. Even then, as with Newton's Laws, it can still be falsified or adapted.

Read more: http://www.experiment-resources.com/research-hypothesis.html

WHAT IS RESEARCH / THE SCIENTIFIC METHOD?

What is Research?
What is Empirical Research?
What is the Scientific Method?
Definition of Research
Definition of the Scientific Method
Definition of Science

STEPS
Steps of the Scientific Method - The scientific method has a similar structure to an hourglass: starting from general questions, narrowing down to focus on one specific aspect, then designing research where we can observe and analyze this aspect. Finally, the hourglass widens again and the researcher concludes and generalizes the findings to the real world.

AIMS OF RESEARCH
The general aims of research are:

Observe and Describe
Predict
Determination of the Causes
Explain

Purpose of Research - Why do we conduct research? Why is it necessary?

ELEMENTS OF RESEARCH
Common scientific research elements are:

Characterization - How to understand a phenomenon
- Decide what to observe about a phenomenon
- How to define the research problem
- How to measure the phenomenon

Hypothesis and Theory
- The research questions before performing research
- Almost always based on previous research

Prediction
- What answers do we expect?
- Reasoning and logic on why we expect these results

Observation or Experimentation
- Testing characterizations, hypotheses, theory and predictions
- Understanding a phenomenon better

Drawing Conclusions

Read more: http://www.experiment-resources.com/research-process.html

COMPARING QUANTITATIVE AND QUALITATIVE RESEARCH


What is the difference between quantitative and qualitative research? In a nutshell, quantitative research generates numerical data or information that can be converted into numbers.
by Experiment-Resources.com (2009)

Only measurable data are being gathered and analyzed in this type of research.

Qualitative Research, on the other hand, generates non-numerical data. It focuses on gathering mainly verbal data rather than measurements. Gathered information is then analyzed in an interpretative, subjective, impressionistic or even diagnostic manner. Here's a more detailed point-by-point comparison between the two types of research:

1. Goal or Aim of the Research


The primary aim of Qualitative Research is to provide a complete, detailed description of the research topic. Quantitative Research, on the other hand, focuses more on counting and classifying features and constructing statistical models and figures to explain what is observed.

2. Usage
Qualitative Research is ideal for earlier phases of research projects while for the latter part of the research project, Quantitative Research is highly recommended. Quantitative Research provides the researcher a clearer picture of what to expect in his research compared to Qualitative Research.

3. Data Gathering Instrument


The researcher serves as the primary data gathering instrument in Qualitative Research. Here, the researcher employs various data-gathering strategies, depending upon the thrust or approach of his research. Examples of data-gathering strategies used in Qualitative Research are individual in-depth interviews, structured and non-structured interviews, focus groups, narratives, content or documentary analysis, participant observation and archival research. On the other hand, Quantitative Research makes use of tools such as questionnaires, surveys and other equipment to collect numerical or measurable data.

4. Type of Data
The presentation of data in Qualitative Research is in the form of words (from interviews), images (videos) or objects (such as artifacts). If you are conducting Quantitative Research, on the other hand, what will most likely appear in your discussion are figures in the form of graphs and tables containing data in the form of numbers and statistics.

5. Approach

Qualitative Research is primarily subjective in approach, as it seeks to understand human behavior and the reasons that govern such behavior. Researchers have a tendency to become subjectively immersed in the subject matter in this type of research method. In Quantitative Research, researchers tend to remain objectively separated from the subject matter. This is because Quantitative Research is objective in approach, in the sense that it only seeks precise measurements and analysis of target concepts to answer the inquiry.

DETERMINING WHICH METHOD SHOULD BE USED


Debates have been ongoing, tackling which method is better than the other. The reason why this remains unresolved is that each method has its own strengths and weaknesses, which vary depending upon the topic the researcher wants to discuss. This then leads us to the question: which method should be used? The goals of each of the two methods have already been discussed above. Therefore, if your study aims to find out the answer to an inquiry through numerical evidence, then you should make use of Quantitative Research. However, if in your study you wish to explain further why a particular event happened, or why a particular phenomenon is the case, then you should make use of Qualitative Research. Some studies make use of both Quantitative and Qualitative Research, letting the two complement each other. If your study aims to find out, for example, what the dominant human behavior is towards a particular object or event, and at the same time aims to examine why this is the case, it is then ideal to make use of both methods.

Read more: http://www.experiment-resources.com/quantitative-and-qualitative-research.html

STATISTICS TUTORIAL
This statistics tutorial is a guide to help you understand key concepts of statistics and how these concepts relate to the scientific method and research.
by Experiment-Resources.com (2008)

Scientists frequently use statistics to analyze their results. Why do researchers use statistics? Statistics can help us understand a phenomenon by confirming or rejecting a hypothesis, and it is vital to how we acquire knowledge for most scientific theories. You don't need to be a scientist though; anyone wanting to learn about how researchers can get help from statistics may want to read this statistics tutorial for the scientific method.

RESEARCH DATA
This section of the statistics tutorial is about understanding how data is acquired and used. The results of a science investigation often contain much more data or information than the researcher needs. This data-material, or information, is called raw data. To be able to analyze the data sensibly, the raw data is processed into "output data". There are many methods to process the data, but basically the scientist organizes and summarizes the raw data into a more sensible chunk of data. Any type of organized information may be called a "data set". Then, researchers may apply different statistical methods to analyze and understand the data better (and more accurately). Depending on the research, the scientist may also want to use statistics descriptively or for exploratory research. What is great about raw data is that you can go back and check things if you suspect something different is going on than you originally thought. This happens after you have analyzed the meaning of the results. The raw data can give you ideas for new hypotheses, since you get a better view of what is going on. You can also control the variables which might influence the conclusion (e.g. third variables). In statistics, a parameter is any numerical quantity that characterizes a given population or some aspect of it.

CENTRAL TENDENCY AND NORMAL DISTRIBUTION


This part of the statistics tutorial will help you understand distribution, central tendency and how they relate to data sets. Much data from the real world is normally distributed, that is, it follows a frequency curve, or frequency distribution, which has the most frequent values near the middle. Many experiments rely on assumptions of a normal distribution. This is a reason why researchers very often measure the central tendency in statistical research, such as the mean (arithmetic mean or geometric mean), median or mode.

The central tendency may give a fairly good idea about the nature of the data (mean, median and mode show the "middle value"), especially when combined with measurements of how the data is distributed. Scientists normally calculate the standard deviation to measure how the data is distributed, but there are various measures of spread: variance, standard deviation, standard error of the mean, standard error of the estimate, or the "range" (which states the extremities in the data).

To create the graph of the normal distribution for something, you will normally use the arithmetic mean of a "big enough sample" and you will have to calculate the standard deviation. However, the sampling distribution will not be normally distributed if the distribution is skewed (naturally) or has outliers (often rare outcomes or measurement errors) messing up the data. One example of a distribution which is not normally distributed is the F-distribution, which is skewed to the right. So researchers often double-check that their results are normally distributed using the range, median and mode. If the distribution is not normally distributed, this will influence which statistical test/method to choose for the analysis.
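As a quick illustration, the following Python sketch computes the measures of central tendency and spread named above for a small invented data set, using the standard library's statistics module.

    # Invented data set; 12 plays the role of a possible outlier.
    from statistics import mean, median, mode, stdev, variance

    data = [4, 5, 5, 6, 6, 6, 7, 7, 8, 12]

    print(mean(data), median(data), mode(data))   # central tendency
    print(variance(data), stdev(data))            # spread
    print(max(data) - min(data))                  # range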

OTHER TOOLS

Quartile
Trimean

HYPOTHESIS TESTING - STATISTICS TUTORIAL


How do we know whether a hypothesis is correct or not? Why use statistics to determine this? Using statistics in research involves a lot more than making use of statistical formulas or getting to know statistical software. Making use of statistics in research basically involves:

1. Learning basic statistics.
2. Understanding the relationship between probability and statistics.
3. Comprehension of the two major branches of statistics: descriptive statistics and inferential statistics.
4. Knowledge of how statistics relates to the scientific method.

Statistics in research is not just about formulas and calculation; many wrong conclusions have been drawn from not understanding basic statistical concepts. Statistical inference helps us to draw conclusions from samples of a population. When conducting experiments, a critical part is to test hypotheses against each other. Thus, this is an important part of the statistics tutorial for the scientific method.

Hypothesis testing is conducted by formulating an alternative hypothesis which is tested against the null hypothesis, the common view. The hypotheses are tested statistically against each other. The researcher can work out a confidence interval, which defines the limits for when you will regard a result as supporting the null hypothesis and when the alternative research hypothesis is supported. This means that not all differences between the experimental group and the control group can be accepted as supporting the alternative hypothesis - the result needs to differ statistically significantly for the researcher to accept the alternative hypothesis. This is done using a significance test (another article).

Caution though: data dredging, data snooping or fishing for data without later testing your hypothesis in a controlled experiment may lead you to conclude on cause and effect even though there is no relationship to the truth. Depending on the hypothesis, you will have to choose between one-tailed and two-tailed tests. Sometimes the control group is replaced with experimental probability - often if the research treats a phenomenon which is ethically problematic, economically too costly or overly time-consuming, the true experimental design is replaced by a quasi-experimental approach. Often there is a publication bias when the researcher finds the alternative hypothesis correct, rather than having a "null result", concluding that the null hypothesis provides the best explanation.

If applied correctly, statistics can be used to understand cause and effect between research variables. It may also help identify third variables, although statistics can also be used to manipulate and cover up third variables if the person presenting the numbers does not have honest intentions (or sufficient knowledge). Misuse of statistics is a common phenomenon, and will probably continue as long as people have intentions of trying to influence others. Proper statistical treatment of experimental data can thus help avoid unethical use of statistics. The philosophy of statistics involves justifying the proper use of statistics, ensuring statistical validity and establishing the ethics of statistics.
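The confidence interval mentioned above can be illustrated with a short Python sketch: it computes a 95% confidence interval for a sample mean using the t-distribution. The sample values are invented, and SciPy is assumed to be available.

    # 95% confidence interval for the mean of an invented sample.
    from statistics import mean, stdev
    from math import sqrt
    from scipy import stats

    sample = [5.1, 4.8, 5.6, 5.0, 4.7, 5.3, 5.2, 4.9]
    n = len(sample)
    sem = stdev(sample) / sqrt(n)            # standard error of the mean
    t_crit = stats.t.ppf(0.975, n - 1)       # two-tailed 95% critical value

    ci_low = mean(sample) - t_crit * sem
    ci_high = mean(sample) + t_crit * sem
    print(ci_low, ci_high)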

RELIABILITY AND EXPERIMENTAL ERROR


Statistical tests make use of data from samples. These results are then generalized to the general population. How can we know that they reflect the correct conclusion?

Contrary to what some might believe, errors in research are an essential part of significance testing. Ironically, the possibility of a research error is what makes the research scientific in the first place. If a hypothesis cannot be falsified (e.g. the hypothesis has circular logic), it is not testable, and thus not scientific, by definition. If a hypothesis is testable, it has to be open to the possibility of going wrong. Statistically, this opens up the possibility of getting experimental errors in your results due to random errors or other problems with the research. Experimental errors may be broken down into Type-I errors and Type-II errors. ROC Curves are used to calculate the sensitivity between true positives and false positives.

A power analysis of a statistical test can determine how many samples a test will need to have an acceptable p-value in order to reject a false null hypothesis. The margin of error is related to the confidence interval and the relationship between statistical significance, sample size and expected results. The effect size estimates the strength of the relationship between two variables in a population. It may help determine the sample size needed to generalize the results to the whole population.

Replicating the research of others is also essential to understand whether the results of the research can be generalized or were just due to a random "outlier experiment". Replication can help identify both random errors and systematic errors (test validity). Cronbach's Alpha is used to measure the internal consistency or reliability of a test score. Replicating the experiment/research ensures the reliability of the results statistically. What you often see, if the results have outliers, is a regression towards the mean, which then makes the result not statistically different between the experimental and control group.
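Effect size and power analysis can be illustrated with the following Python sketch, which computes Cohen's d for two invented groups of equal size and then, assuming the statsmodels package is available, estimates the sample size per group needed for 80% power at alpha = 0.05.

    # Cohen's d for two invented groups (equal sizes assumed).
    from statistics import mean, variance
    from math import sqrt

    group_a = [23, 25, 28, 30, 26, 27, 24, 29]
    group_b = [20, 22, 21, 25, 23, 19, 24, 22]

    pooled_sd = sqrt((variance(group_a) + variance(group_b)) / 2)
    cohens_d = (mean(group_a) - mean(group_b)) / pooled_sd
    print(round(cohens_d, 2))

    # Rough sample size per group for 80% power at alpha = 0.05
    # (assumes the statsmodels package is installed).
    from statsmodels.stats.power import TTestIndPower
    n_needed = TTestIndPower().solve_power(effect_size=cohens_d, alpha=0.05, power=0.8)
    print(round(n_needed))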

STATISTICAL TESTS
Here we will introduce a few commonly used statistics tests/methods, often used by researchers.

RELATIONSHIP BETWEEN VARIABLES

The relationship between variables is very important to scientists. It helps them to understand the nature of what they are studying. A linear relationship is when two variables vary proportionally, that is, if one variable goes up, the other variable will also go up by the same ratio. A non-linear relationship is when variables do not vary proportionally. Correlation is a way to express the relationship between two data sets or between two variables. Measurement scales are used to classify, categorize and (if applicable) quantify variables.

The Pearson correlation coefficient (or Pearson Product-Moment Correlation) will only express the linear relationship between two variables. Spearman's rho is mostly used for linear relationships when dealing with ordinal variables. Kendall's tau coefficient can be used to measure nonlinear relationships. Partial Correlation (and Multiple Correlation) may be used when controlling for a third variable.
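The difference between Pearson and Spearman correlation can be seen in a short Python sketch (SciPy assumed); the data set is invented and deliberately non-linear but monotonic, so the rank-based Spearman coefficient comes out higher than the linear Pearson coefficient.

    # Invented data: y = x squared, a monotonic but non-linear relationship.
    from scipy import stats

    x = [1, 2, 3, 4, 5, 6, 7, 8]
    y = [1, 4, 9, 16, 25, 36, 49, 64]

    pearson_r, p1 = stats.pearsonr(x, y)      # measures linear relationship
    spearman_rho, p2 = stats.spearmanr(x, y)  # measures rank (monotonic) relationship
    print(round(pearson_r, 3), round(spearman_rho, 3))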

PREDICTIONS
The goal of predictions is to understand causes. Correlation does not necessarily mean causation. With linear regression, you often measure a manipulated variable. What is the difference between correlation and linear regression? Basically, a correlational study looks at the strength of the relationship between the variables, whereas linear regression is about the best-fit line in a graph. Regression analysis and other modeling tools include the following (a short sketch follows the list):

Linear Regression
Multiple Regression
A Path Analysis is an extension of the regression model
A Factor Analysis attempts to uncover underlying factors of something
The Meta-Analysis frequently makes use of effect size
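A simple linear regression, the best-fit line mentioned above, might look like the following Python sketch (SciPy assumed); the study-hours and exam-score values are invented for illustration.

    # Fit a best-fit line to invented data and use it for a prediction.
    from scipy import stats

    hours_studied = [1, 2, 3, 4, 5, 6, 7, 8]
    exam_score = [52, 55, 61, 64, 70, 72, 78, 83]

    result = stats.linregress(hours_studied, exam_score)
    print(result.slope, result.intercept, result.rvalue ** 2)   # fit and R-squared

    predicted = result.slope * 9 + result.intercept   # predict score for 9 hours
    print(round(predicted, 1))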

Bayesian Probability is a way of predicting the likelihood of future events in an interactive way, rather than starting to measure and then getting results/predictions.

TESTING HYPOTHESES STATISTICALLY


Student's t-test is a test which can indicate whether the null hypothesis is correct or not. In research it is often used to test differences between two groups (e.g. between a control group and an experimental group). The t-test assumes that the data is more or less normally distributed and that the variance is equal (this can be tested by the F-test). Student's t-test:

Independent One-Sample T-Test
Independent Two-Sample T-Test
Dependent T-Test for Paired Samples

Wilcoxon Signed Rank Test may be used for non-parametric data. A Z-Test is similar to a t-test, but will usually not be used on sample sizes below 30. A Chi-Square can be used if the data is qualitative rather than quantitative.
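An independent two-sample t-test of a control group against an experimental group might look like the following Python sketch (SciPy assumed); the scores and the 0.05 significance level are assumptions made for the example.

    # Compare two invented groups with an independent two-sample t-test.
    from scipy import stats

    control = [14, 15, 15, 16, 17, 14, 16, 15]
    experimental = [17, 18, 16, 19, 18, 20, 17, 19]

    t_stat, p_value = stats.ttest_ind(control, experimental)
    if p_value < 0.05:
        print("Reject the null hypothesis: the groups differ significantly.")
    else:
        print("Fail to reject the null hypothesis.")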

COMPARING MORE THAN TWO GROUPS


An ANOVA, or Analysis of Variance, is used when it is desirable to test whether the variability between groups is greater than the variability within groups, rather than simply comparing two means; it can be applied to more than two groups. The F-distribution is used to calculate p-values for the ANOVA. Common forms of Analysis of Variance are listed below (a short sketch follows the list):

One way ANOVA
Two way ANOVA
Factorial ANOVA
Repeated Measures and ANOVA
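A one-way ANOVA across three groups might look like the following Python sketch (SciPy assumed); the group scores are invented, and the p-value comes from the F-distribution as described above.

    # One-way ANOVA across three invented groups.
    from scipy import stats

    group_1 = [6, 7, 8, 7, 6, 8]
    group_2 = [8, 9, 9, 10, 8, 9]
    group_3 = [5, 6, 5, 7, 6, 5]

    f_stat, p_value = stats.f_oneway(group_1, group_2, group_3)
    print(round(f_stat, 2), round(p_value, 4))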

NONPARAMETRIC STATISTICS
Some common methods using nonparametric statistics are listed below (a brief sketch of one follows the list):

Cohen's Kappa
Mann-Whitney U-test
Spearman's Rank Correlation Coefficient
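As a brief example of one of the methods listed above, the following Python sketch runs a Mann-Whitney U-test, a nonparametric alternative to the two-sample t-test, on two invented samples (SciPy assumed).

    # Mann-Whitney U-test on two invented samples.
    from scipy import stats

    group_a = [3, 4, 2, 5, 4, 3, 6]
    group_b = [6, 7, 5, 8, 7, 6, 9]

    u_stat, p_value = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
    print(u_stat, p_value)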

OTHER IMPORTANT TERMS IN STATISTICS

Discrete Variables

Read more: http://www.experiment-resources.com/statistics-tutorial.html
