This document discusses validity and reliability in research instruments. It defines validity as the degree to which an instrument measures what it intends to measure and lists four types of validity: face validity, content validity, construct validity, and criterion-related validity. Reliability refers to the consistency of results; two methods for establishing it are discussed: test-retest reliability and internal consistency. The document also lists criteria for assessing quantitative measures and factors to consider in planning data collection, such as finances, schedule, and personnel.
2. Discuss among yourselves
a. How do you know if you have chosen the correct instrument?
b. Will the contents of your instrument lead you to answer your stated problem?
c. What do you think are the characteristics of an acceptable research instrument?
5. 1. Face Validity
Also known as logical validity, face validity is a surface-level judgment of whether the instrument appears to measure what it intends to measure. The researcher determines face validity by inspecting the features of the instrument.
6. 2. Content Validity
This is determined by studying the questions to see whether they elicit the necessary information. An instrument with high content validity meets the objectives of the research.
7. 3. Construct Validity
This refers to whether the test corresponds to its theoretical construct. It is concerned with the extent to which a particular measure relates to other measures and with its consistency with theoretically derived hypotheses.
9. A. Concurrent
It deals with measures that can be administered and validated at the same time. It is determined by administering both the new test and a standardized test to a group of respondents, then finding the correlation between the two sets of scores.
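The procedure above can be sketched in a few lines of Python. The scores here are made-up numbers for illustration; the key step is correlating the two score sets from the same respondents.

```python
import numpy as np

# Hypothetical scores for the same 8 respondents on a new test and on
# an established standardized test (illustrative numbers only).
new_test = np.array([12, 15, 9, 20, 14, 18, 11, 16])
standard_test = np.array([30, 36, 22, 48, 33, 44, 27, 39])

# Concurrent validity: Pearson correlation between the two sets of scores.
r = np.corrcoef(new_test, standard_test)[0, 1]
print(round(r, 3))  # a value near +1 suggests the new test agrees with the standard
```

A correlation close to +1 would be taken as evidence that the new test measures the same thing as the standardized one.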
10. B. Predictive
It refers to how well the test predicts some future behavior of the examinees. This is particularly useful and important for aptitude tests, which attempt to predict how well test-takers will perform in a future setting.
11. Reliability
It refers to the consistency of results. A reliable instrument yields the same results for individuals who take the test more than once.
13. 1. Test-retest or Stability test
The same test is given to a group of respondents twice. The scores in the first administration are correlated with the scores in the second; a high correlation indicates high reliability of the test.
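As a small sketch of the test-retest computation (with made-up scores), the Pearson coefficient can be built directly from its definition: covariance of the two administrations divided by the product of their standard deviations.

```python
import numpy as np

# Hypothetical scores for 6 respondents on the same test given twice
# (illustrative numbers only).
first = np.array([14.0, 18.0, 11.0, 20.0, 9.0, 16.0])
second = np.array([15.0, 17.0, 12.0, 19.0, 10.0, 15.0])

# Test-retest reliability: Pearson r between the two administrations,
# computed from its definition (covariance over the product of SDs).
cov = np.mean((first - first.mean()) * (second - second.mean()))
r = cov / (first.std() * second.std())
print(round(r, 3))  # close to 1: respondents keep nearly the same rank order
```

The closer r is to 1, the more stable the instrument's results are across the two administrations.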
14. 2. Internal Consistency
If the test in question is designed to measure a single basic concept, it is reasonable to assume that a respondent who gets one item right is likely to correctly answer another item that is similar.
15. 2. Internal Consistency
a. Split-half is a method of establishing internal consistency in which the test is given only once to the respondents; the items are then split into two halves, and the scores on the two halves are correlated.
b. The Kuder-Richardson test is a method that measures the extent to which items in one form of a test share commonalities with one another, as do the items of an equivalent test form. It is also called item-total correlation.
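Both methods above can be sketched in Python. The 0/1 answer matrix is made up for illustration; the split-half result is stepped up with the Spearman-Brown correction (since each half is only half the test's length), and the Kuder-Richardson coefficient shown is the common KR-20 form for right/wrong items.

```python
import numpy as np

# Hypothetical right(1)/wrong(0) answers: 5 respondents x 4 items
# (illustrative numbers only).
answers = np.array([
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
])

# a. Split-half: correlate odd-item and even-item half scores from a
#    single administration, then apply the Spearman-Brown correction
#    to estimate the reliability of the full-length test.
odd = answers[:, 0::2].sum(axis=1)
even = answers[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd, even)[0, 1]
split_half = 2 * r_half / (1 + r_half)

# b. Kuder-Richardson (KR-20) for dichotomous items:
#    KR20 = k/(k-1) * (1 - sum(p*q) / variance of total scores)
k = answers.shape[1]
p = answers.mean(axis=0)               # proportion answering each item right
q = 1 - p
total_var = answers.sum(axis=1).var()  # population variance of total scores
kr20 = (k / (k - 1)) * (1 - (p * q).sum() / total_var)

print(round(split_half, 3), round(kr20, 3))
```

Both coefficients range up to 1; values nearer 1 indicate that the items behave consistently as measures of a single concept.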