1. To maximize the utilization of human resources for the achievement of individual and
organizational goals
2. To provide an opportunity and comprehensive framework for the development of human
resources in an organization for full expression of their latent and manifest potentials
3. To locate, ensure, recognize and develop the enabling capabilities of the employees in the
organization in relation to their present and potential roles
4. To develop a constructive mind-set and the overall personality of the employees
5. To develop a sense of team spirit, teamwork and inter-team collaboration
6. To develop organizational health, culture and effectiveness
7. To humanize the work in the organization
8. To develop dynamic human relationships; and
9. To generate systematic information about human resources
2. What steps must be followed in the selection of instruments for HRD?
A. Selection is guided by the purpose, the group or individual with whom the instrument will
be used, and the confidence of the facilitator (familiarity with the instrument and its
conceptual framework). The most important element in selection is the facilitator’s
familiarity with, or rather mastery of, the instrument and its theory. The facilitator should
first try out the instrument; if it is person-focused, he or she should use it on himself or
herself and become quite clear about its various aspects and the interpretation of the scores.
Among the technical aspects, reliability, validity and objectivity are the most important. The
instrument should have high reliability and validity. However, an instrument for HRD may
have only face validity, as the purpose is development rather than taking decisions on
recruitment, promotion, etc.
5. What are the different types of measurement scales used in research? How are they
different from each other?
A. Variables differ in “how well” they can be measured, i.e., in how much measurable
information their measurement scale can provide. There is obviously some measurement
error involved in every measurement, which determines the “amount of information” that
we can obtain. Another factor that determines the amount of information that can be
provided by a variable is its “type of measurement scale”. Specifically, variables are
classified as a) nominal, b) ordinal, c) interval, d) ratio.
Nominal variables allow for only qualitative classification. That is, they can only be
measured in terms of whether the individual items belong to some distinctively different
categories, but we cannot quantify or even rank order those categories. For example, all
we can say is that two individuals are different in terms of variable A (e.g., they are of
different races), but we cannot say which one “has more” of the quality represented by the
variable. Typical examples of nominal variables are gender, race, color, city, etc.
Ordinal variables allow us to rank order the items we measure in terms of which has
less and which has more of the quality represented by the variable, but still they do not
allow us to say “how much more”. A typical example of an ordinal variable is the
socioeconomic status of families. For example, we know that the upper middle class is
higher than the middle class, but we cannot say that it is 18% higher.
Incidentally, this very distinction between nominal, ordinal, and interval scales is itself a
good example of an ordinal variable. For example, we can say that nominal measurement
provides less information than ordinal measurement, but we cannot say “how much less”,
or how this difference compares to the difference between ordinal and interval scales.
Interval variables allow us not only to rank order the items that are measured, but also
to quantify and compare the sizes of difference between them. For example, temperature,
as measured in degrees Fahrenheit or Celsius, constitutes an interval scale. We can say
that a temperature of 40 degrees is higher than a temperature of 30 degrees, and that an
increase from 20 to 40 degrees is twice as much as an increase from 30 to 40 degrees.
Ratio variables are very similar to interval variables; in addition to all the properties of
interval variables, they feature an identifiable absolute zero point, and thus they allow for
statements such as “X is two times more than Y”. Typical examples of ratio scales are
measures of time or space. For example, since the Kelvin temperature scale is a ratio
scale, not only can we say that a temperature of 200 K is higher than one of 100 K, we
can correctly state that it is twice as high. Interval scales do not have this ratio property.
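The four scale types can be contrasted in a short sketch; the variables and category labels below are illustrative examples of our own, not taken from the text:

```python
# Nominal: categories only -- equality is the only meaningful comparison.
race_a, race_b = "groupX", "groupY"
nominal_same = (race_a == race_b)            # the two individuals differ, nothing more

# Ordinal: rank order is meaningful, but not the size of the gap.
ses_rank = {"lower": 1, "middle": 2, "upper-middle": 3, "upper": 4}
ordinal_higher = ses_rank["upper-middle"] > ses_rank["middle"]   # True

# Interval: differences are meaningful, ratios are not (no true zero).
c1, c2, c3 = 20.0, 30.0, 40.0                # degrees Celsius
interval_diff_ratio = (c3 - c1) / (c3 - c2)  # the *increase* 20->40 is twice 30->40,
                                             # but 40 C is not "twice as hot" as 20 C

# Ratio: an absolute zero makes ratio statements valid.
k1, k2 = 100.0, 200.0                        # kelvins
ratio_statement = k2 / k1                    # 200 K really is twice 100 K

print(nominal_same, ordinal_higher, interval_diff_ratio, ratio_statement)
```

Note that the same physical quantity (temperature) lands on different scales depending on how it is measured: Celsius is interval, Kelvin is ratio.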
6. What is reliability? What are the different types of reliability used in tests?
A. Reliability refers to the consistency of scores obtained by the same persons when they are
reexamined with the same test on different occasions or different sets of equivalent items.
The concept of reliability underlies the computation of the error of measurement of a
single score, whereby we can predict the range of fluctuation likely to occur in a single
individual’s score as a result of irrelevant or unknown chance factors.
a. Test-Retest Reliability - The most direct method is to repeat the identical test on a
second occasion with the same persons; the reliability coefficient is the correlation
between the scores obtained on the two administrations. The error variance here
corresponds to random fluctuations of performance from one session to the other.
b. Alternate Form Reliability - One way of avoiding the difficulties encountered in test-
retest reliability is through the use of alternate forms of the test. The same persons can
thus be tested with one form on the first occasion and with another, equivalent form on
the second. The correlation between the scores obtained on the two forms represents the
reliability coefficient of the test.
Like test-retest reliability, alternate-form reliability should always be accompanied by a
statement of the length of the interval between test administrations, as well as a
description of relevant intervening experience. If the two forms are administered in
immediate succession, the resulting correlation shows reliability across forms only, not
across occasions. The error variance in this case represents fluctuations in performance
from one set of items to another, but not fluctuations over time.
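In practice, the alternate-form (or test-retest) reliability coefficient is simply the Pearson correlation between the two sets of scores. A minimal pure-Python sketch, using hypothetical scores for five examinees on two equivalent forms:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores of five examinees on two equivalent forms of a test.
form_a = [12, 15, 11, 18, 14]
form_b = [13, 16, 10, 19, 15]

reliability = pearson_r(form_a, form_b)
print(round(reliability, 3))
```

A coefficient near 1.0, as here, indicates that the two forms rank and space the examinees almost identically, i.e., little error variance across item sets.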
7. What is validity? What are the different types of validity used in tests?
A. The validity of a test concerns what the test measures and how well it does so. It tells us
what can be inferred from the test scores. In this connection we should guard against
accepting the test name as an index of what the test measures. Test names provide short,
convenient labels for identification purposes.
Different types of validity used in tests are –
a. Content Validity - Content-description validation procedures involve
essentially the systematic examination of the test content to determine whether it
covers a representative sample of the behavior domain to be measured. Such a
validation procedure is commonly used in tests designed to measure how well the
individual has mastered a specific skill or course of study. Content validity is built
into a test from the outset through the choice of appropriate items.
b. Face Validity - Content validity should not be confused with face validity. The
latter is not validity in the technical sense; it refers not to what the test actually
measures, but to what it appears superficially to measure. Face validity pertains to
whether the test “looks valid” to the examinees who take it, the administrative
personnel who decide on its use, and other technically untrained observers.
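The core idea behind content validation, checking whether the items cover a representative sample of the behavior domain, can be sketched crudely in code; the domain and topic labels below are hypothetical:

```python
# A crude content-coverage check: what fraction of the behavior domain's
# topics is represented by at least one test item?
domain_topics = {"addition", "subtraction", "multiplication", "division"}
item_topics = ["addition", "addition", "subtraction", "multiplication"]

covered = domain_topics & set(item_topics)
coverage = len(covered) / len(domain_topics)
print(sorted(domain_topics - covered), coverage)
```

Here one domain topic is left uncovered, so the test under-represents the domain; real content validation would also weigh how heavily each topic should be sampled, not just whether it appears.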
8. How is the ‘t’ test different from ANOVA? How are the independent-sample ‘t’ test and
the paired-sample ‘t’ test different?
A. Refer to the Tests doc for the answer.
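Although the full answer lives in the Tests doc, the computational distinction can be sketched in pure Python (the data and function names below are our own): the independent-sample t compares two unrelated groups through a pooled variance, while the paired-sample t is a one-sample t on the within-pair differences; ANOVA’s F test generalizes the independent comparison to three or more groups.

```python
from math import sqrt

def independent_t(a, b):
    """Two-sample t statistic (pooled variance) for independent groups."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    sa2 = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variance, group a
    sb2 = sum((x - mb) ** 2 for x in b) / (nb - 1)   # sample variance, group b
    sp2 = ((na - 1) * sa2 + (nb - 1) * sb2) / (na + nb - 2)  # pooled variance
    return (ma - mb) / sqrt(sp2 * (1 / na + 1 / nb))

def paired_t(a, b):
    """Paired-sample t statistic: a one-sample t on the differences."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    md = sum(d) / n
    sd2 = sum((x - md) ** 2 for x in d) / (n - 1)
    return md / sqrt(sd2 / n)

pre  = [10, 12, 9, 11, 13]   # hypothetical pre-training scores
post = [12, 14, 10, 13, 15]  # the same persons after training

print(round(independent_t(pre, post), 3), round(paired_t(pre, post), 3))
```

On these numbers the paired statistic is far larger in magnitude than the independent one, because each person improves by almost the same amount; removing between-person variance is exactly why the paired design is more powerful when observations are naturally linked.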