
1. What are the main aims, goals and objectives of HRD?

A. The main aims, goals, and objectives of HRD are:

1. To maximize the utilization of human resources for the achievement of individual and
organizational goals
2. To provide an opportunity and comprehensive framework for the development of human
resources in an organization for full expression of their latent and manifest potentials
3. To locate, ensure, recognize and develop the enabling capabilities of the employees in the
organization in relation to their present and potential roles
4. To develop a constructive mindset and the overall personality of employees
5. To develop a sense of team spirit, teamwork and inter-team collaboration
6. To develop organizational health, culture and effectiveness
7. To humanize work in the organization
8. To develop dynamic human relationships; and
9. To generate systematic information about human resources

2. What steps must be followed in the selection of instruments for HRD?
A. Selection is guided by the purpose, the group or individual with whom the instrument will
be used, and the confidence of the facilitator (familiarity with the instrument and its
conceptual framework). The most important element in selection is the facilitator's
familiarity with, or rather mastery of, the instrument and its theory. The facilitator should
first try out the instrument; if it is person-focused, he should use it on himself and be quite
clear about its various aspects and the interpretation of the scores. Among the technical
aspects, reliability, validity and objectivity are the most important. The instrument should
have high reliability and validity. However, an instrument for HRD may have only face
validity, as the purpose is development, not taking decisions on recruitment, promotion,
etc.

3. Explain how individual and group tests in psychometrics differ.


A. Tests that can be given to only one person at a time are known as individual tests. The
examiner or test administrator (the person giving the test) gives the test to only one
person at a time, much as a psychotherapist sees only one person at a time.
A group test, by contrast, can be administered to more than one person at a time by a single
examiner, such as when an instructor gives everyone in a class a test at the same time.

4. Explain DeVellis's simple guidelines for item writing.


A. Define clearly what you want to measure: To do this, use substantive theory as a guide
and try to make items as specific as possible.
B. Generate an item pool: Theoretically, all items are randomly chosen from a universe of
item content. In practice, however, care in selecting and developing the items is valuable.
Avoid redundant items. In the initial phases, you may want to write three or four items
for each one that will eventually be used on the test or scale.
C. Avoid exceptionally long items. Long items are often confusing and misleading
D. Keep the level of reading difficulty appropriate for those who will complete the scale.
E. Avoid "double-barreled" items that convey two or more ideas at the same time. For
example, consider an item that asks respondents to agree or disagree with the
statement, "I vote Democratic because I support social programs". There are two different
statements with which the person could agree: "I vote Democratic" and "I support social
programs".

5. What are the different types of measurement scales used in research? How do they
differ from each other?
A. Variables differ in "how well" they can be measured, i.e., in how much measurable
information their measurement scale can provide. There is obviously some measurement
error involved in every measurement, which determines the "amount of information" that
we can obtain. Another factor that determines the amount of information that can be
provided by a variable is its "type of measurement scale". Specifically, variables are
classified as a) nominal, b) ordinal, c) interval, d) ratio.

Nominal variables allow for only qualitative classification. That is, they can only be
measured in terms of whether the individual items belong to some distinctively different
categories, but we cannot quantify or even rank-order those categories. For example, all
we can say is that two individuals are different in terms of variable A (e.g., they are of
different races), but we cannot say which one "has more" of the quality represented by the
variable. Typical examples of nominal variables are gender, race, color, city, etc.

Ordinal variables allow us to rank order the items we measure in terms of which has
less and which has more of the quality represented by the variable, but still they do not
allow us to say "how much more". A typical example of an ordinal variable is the
socioeconomic status of families. For example, we know that the upper middle class is
higher than the middle class, but we cannot say that it is 18% higher.
This very distinction between nominal, ordinal, and interval scales itself represents a
good example of an ordinal variable. For example, we can say that nominal measurement
provides less information than ordinal measurement, but we cannot say "how much less"
or how this difference compares with the difference between ordinal and interval scales.

Interval variables allow us not only to rank order the items that are measured, but also
to quantify and compare the sizes of difference between them. For example, temperature,
as measured in degrees Fahrenheit or Celsius, constitutes an interval scale. We can say
that a temperature of 40 degrees is higher than a temperature of 30 degrees, and that an
increase from 20 to 40 degrees is twice as much as an increase from 30 to 40 degrees.

Ratio variables are very similar to interval variables; in addition to all the properties of
interval variables, they feature an identifiable absolute zero point, and thus they allow for
statements such as "X is two times more than Y". Typical examples of ratio scales are
measures of time or space. For example, since the Kelvin temperature scale is a ratio scale,
not only can we say that a temperature of 200 degrees is higher than one of 100 degrees, we
can correctly state that it is twice as high. Interval scales do not have this ratio property.
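The interval-versus-ratio distinction can be sketched in a few lines of Python (the temperature values are arbitrary): differences are meaningful on both the Celsius and Kelvin scales, but ratios are meaningful only on the Kelvin scale, which has an absolute zero.

```python
# A small sketch with hypothetical temperatures: Celsius is an interval
# scale, kelvin is a ratio scale, because ratios are only meaningful
# when the scale has an absolute zero point.

def celsius_to_kelvin(c):
    """Convert degrees Celsius to kelvin (absolute zero = -273.15 C)."""
    return c + 273.15

t1_c, t2_c = 10.0, 20.0

# The *difference* is the same on both scales (interval property):
assert (t2_c - t1_c) == (celsius_to_kelvin(t2_c) - celsius_to_kelvin(t1_c))

# The *ratio* is not: 20 C is not "twice as hot" as 10 C ...
print(t2_c / t1_c)                                        # 2.0 (misleading)
# ... because on the kelvin (ratio) scale the ratio is quite different:
print(celsius_to_kelvin(t2_c) / celsius_to_kelvin(t1_c))  # ~1.035
```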

6. What is reliability? What are the different types of reliability used in test?
A. Reliability refers to the consistency of scores obtained by the same persons when they are
reexamined with the same test on different occasions or different sets of equivalent items.
The concept of reliability underlies the computation of the error of measurement of a
single score, whereby we can predict the range of fluctuation likely to occur in a single
individual’s score as a result of irrelevant or unknown chance factors.
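One common way to express this range of fluctuation is the standard error of measurement, SEM = SD * sqrt(1 - r), where SD is the standard deviation of the test scores and r the reliability coefficient. A minimal sketch, with a hypothetical test (SD = 15, reliability = 0.91):

```python
# Standard error of measurement of a single score; the SD and
# reliability values below are hypothetical.
from math import sqrt

def sem(sd, reliability):
    """SEM = SD * sqrt(1 - r)."""
    return sd * sqrt(1 - reliability)

print(sem(15, 0.91))  # 4.5 -> an obtained score fluctuates roughly
# within +/- 1 SEM of the true score about two-thirds of the time
```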

Different types of reliability used in test are –


a. Test-Retest Reliability - The most obvious method for finding the reliability of test
scores is by repeating the identical test on a second occasion. The reliability coefficient in
this case is simply the correlation between the scores obtained by the same persons on the
two administrations of the test. Retest reliability shows the extent to which scores on a
test can be generalized over different occasions; the higher the reliability, the less
susceptible the scores are to the random daily changes in the condition of the test takers
or of the testing environment.
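As a rough sketch (the scores of the five persons below are invented), the retest reliability coefficient is simply the Pearson correlation between the two administrations:

```python
# Test-retest reliability as the Pearson correlation between scores
# from two administrations of the same test (hypothetical data).
from math import sqrt

def pearson(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores of five persons on the same test, two occasions:
first_admin  = [12, 15, 11, 18, 14]
second_admin = [13, 16, 10, 19, 15]

r_tt = pearson(first_admin, second_admin)
print(round(r_tt, 3))  # a high r means scores generalize across occasions
```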

b. Alternate Form Reliability - One way of avoiding the difficulties encountered in test-
retest reliability is through the use of alternate forms of the test. The same persons can
thus be tested with one form on the first occasion and with another, equivalent form on
the second. The correlation between the scores obtained on the two forms represents the
reliability coefficient of the test.
Like test-retest reliability, alternate-form reliability should always be accompanied by a
statement of the length of the interval between test administrations, as well as a
description of relevant intervening experiences. If the two forms are administered in
immediate succession, the resulting correlation shows reliability across forms only, not
across occasions. The error variance in this case represents fluctuations in performance
from one set of items to another, but not fluctuations over time.

c. Split-Half Reliability - From a single administration of one form of a test, it is possible
to arrive at a measure of reliability by various split-half procedures. In this way, two
scores are obtained for each person by dividing the test into equivalent halves. It is
apparent that split-half reliability provides a measure of consistency with regard to content
sampling. To find split-half reliability, the first problem is how to split the test in order to
obtain the most nearly equivalent halves.
A procedure that is adequate for most purposes is to find the scores on the odd and even
items of the test. One precaution to be observed in making such an odd-even split pertains
to groups of items dealing with a single problem, such as questions referring to a
particular mechanical diagram or to a given passage in a reading test.
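The odd-even procedure can be sketched as follows (the item responses are invented). Because each half contains only half the items, the half-test correlation is commonly stepped up with the Spearman-Brown formula, r_full = 2r / (1 + r), to estimate the reliability of the full-length test:

```python
# Odd-even split-half reliability with the Spearman-Brown correction
# (item responses are hypothetical).
from math import sqrt

def pearson(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Each row: one person's item scores (0/1) on an eight-item test.
responses = [
    [1, 1, 0, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 1, 1, 0, 1],
]

odd_scores  = [sum(r[0::2]) for r in responses]  # items 1, 3, 5, 7
even_scores = [sum(r[1::2]) for r in responses]  # items 2, 4, 6, 8

r_half = pearson(odd_scores, even_scores)
r_full = 2 * r_half / (1 + r_half)  # Spearman-Brown step-up
print(round(r_half, 3), round(r_full, 3))  # 0.875 0.933
```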

7. What is validity? And what are the different types of validity used in tests?
A. The validity of a test concerns what the test measures and how well it does so. It tells us
what can be inferred from the test scores. In this connection we should guard against
accepting the test name as an index of what the test measures. Test names provide short,
convenient labels for identification purposes.
Different types of validity used in tests are –
a. Content Validity - Content-description validation procedures involve
essentially the systematic examination of the test content to determine whether it
covers a representative sample of the behavior domain to be measured. Such a
validation procedure is commonly used in test design to measure how well the
individual has mastered a specific skill or course of study. Content validity is built
into a test from the outset through the choice of appropriate items.

b. Face Validity - Content validity should not be confused with face validity. The
latter is not validity in the technical sense; it refers not to what the test actually
measures, but to what it appears superficially to measure. Face validity pertains to
whether the test “looks valid” to the examinees who take it, the administrative
personnel who decide on its use, and other technically untrained observers.

c. Concurrent & Predictive Validity - Concurrent validation procedures indicate
the effectiveness of a test in predicting an individual's performance in specified
activities. The criterion measure against which test scores are validated
may be obtained at approximately the same time as the test scores or after a
stated interval. The term "prediction" can be used in the broader sense, to refer to
prediction from the test to any criterion situation, or in the more limited sense of
prediction over a time interval. The logical distinction between predictive and
concurrent validation is based not on time but on the objectives of testing. The
difference can be illustrated by asking -
"Does Smith qualify as a satisfactory pilot?"
or
"Does Smith have the prerequisites to become a satisfactory pilot?"
The first question calls for concurrent validation; the second for predictive
validation.

8. How is the ‘t’ test different from ANOVA? How do the independent-samples and
paired-samples ‘t’ tests differ?
A. Refer to the Tests doc for the answer.
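Since the answer defers to a separate document, a rough pure-Python sketch (with made-up before/after training scores) may help. With only two groups, one-way ANOVA and the independent-samples t test coincide: F equals t squared. The paired-samples t test instead analyzes the within-person differences, removing between-person variance from the error term.

```python
# Independent-samples t, paired-samples t, and one-way ANOVA F
# computed from first principles (all data hypothetical).
from statistics import mean, variance  # variance() = sample variance

def t_independent(a, b):
    """Independent-samples t (equal-variance, pooled) for two groups."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

def t_paired(a, b):
    """Paired-samples t: a one-sample t on the within-pair differences."""
    d = [x - y for x, y in zip(a, b)]
    return mean(d) / (variance(d) / len(d)) ** 0.5

def f_oneway(*groups):
    """One-way ANOVA F: between-groups vs within-groups mean squares."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean([x for g in groups for x in g])
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

pre  = [10, 12, 9, 14, 11]   # hypothetical scores before training
post = [13, 14, 11, 17, 12]  # the same persons after training

t = t_independent(pre, post)
print(round(t ** 2, 3), round(f_oneway(pre, post), 3))  # equal: F = t^2
print(round(t_paired(pre, post), 3))
```

Note that here |t_paired| is much larger than |t_independent|: because the same persons were measured twice, the scores are correlated, and the paired test exploits that correlation.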

9. What is grounded theory? What is the logic behind it?


A. GT is both a strategy for doing research and a particular style of analyzing the data
arising from that research. Each of these aspects has a particular set of procedures and
techniques. It is not a theory in itself, except perhaps in the sense of claiming that the
preferred approach to theory development is via the data you collect. While grounded
theory is often presented as appropriate for studies that are exclusively qualitative,
there is no reason why some quantitative data collection should not be included.

Logic behind Grounded Theory –


- Simultaneous involvement in the data collection and analysis phases of research
- Developing analytic codes and categories from the data, not from preconceived hypotheses
- Constructing middle-range theories to explain behavior and processes
- Memo writing, that is, explicit notes to fill out categories
- Making comparisons between data and data, data and concept, and concept and concept
- Theoretical sampling, that is, sampling for theory construction to check and refine
conceptual categories, not for representativeness of a given population
- Delaying the literature review until after forming the analysis

10. What is projective technique? Is it a suitable HRD instrument in your view?


A. A projective test presents respondents with ambiguous stimuli and asks them to
disambiguate them.

a. Designed to reveal hidden emotions and internal conflicts


b. Content from projective test is analysed for meaning

For the following reasons, it cannot be recommended as an HRD instrument -

a. Complex Scoring System


b. Questionable Norms
c. Subjectivity of Scoring
d. Poor Predictive Validity
e. Inadequate Validity
f. Extensive time needed to learn
g. Heavy reliance on Psychoanalytic Theory

Objective tests are more time- and cost-effective.

You might also like