978-0077825362 Chapter 10 Part 2

Authors: Eugene Zechmeister, Jeanne Zechmeister, John Shaughnessy

Copyright © 2015 McGraw-Hill Education. All rights reserved. No reproduction or distribution without the prior
written consent of McGraw-Hill Education.
LEARNING BY DOING RESEARCH
This chapter compares basic and applied research, and includes an introduction to program evaluation
and the more general question of the role of experimentation in society. Students appreciate the
opportunity to apply what they have learned about research methods to social issues. One way to give
students such an opportunity is to ask them to write a research proposal guided by one or more of the
questions asked by program evaluators (assessment of needs, process, outcome, and efficiency).
Proposals involving program evaluation allow for review of a wide range of research methods
(observation, survey research, archival research, experiments, and quasi-experiments). Doing an applied
research proposal could be one option for students in completing the research proposal poster
assignment we outlined in the “Learning by Doing Research” section of Chapter 8. Or, the assignment
could be done on a smaller scale in the context of a small-group discussion assignment in which students
would work together in class to apply program evaluation to a social issue.
Students could develop applied research proposals on a variety of topics. They could examine the effect
of social programs like Head Start, bilingual education programs, health care, or programs directed
toward reducing abuse of alcohol or other drugs. If students choose this alternative, however, they will
need to do background research on the particular social issue (although students sometimes have the
needed background from their other course work). Another possible topic for an applied research
proposal could be campus issues or programs that are amenable to program evaluation (see the first
discussion topic described in the Issues and Problems for Class Discussion section of this chapter).
Another possibility is for the instructor to provide the background on an issue that could be evaluated
using program evaluation. Then students could propose programs to be evaluated and describe how they
would use program evaluation in doing so. The last approach may work best as a small group discussion
assignment. Regardless of the issue being addressed or the format of the assignment, the key to
achieving the goal of this assignment is to give students the chance to see firsthand that research
methods can be applied to social issues.
INSTRUCTOR’S LECTURE/DISCUSSION AIDS
The following pages reproduce content from Chapter 10 and may be used to facilitate lecture or
discussion.
1. Applied Research: This page describes the goal of applied research and introduces quasi-
experiments and program evaluation.
2. True Experiments: This page addresses characteristics of true experiments, obstacles to conducting
true experiments in natural settings, and threats to internal validity.
3-7. Threats to Internal Validity: The first four pages in this set define and illustrate the eight major threats
to internal validity; the fifth summarizes the role of a comparison group in eliminating threats and
identifies threats not eliminated by true experiments.
8. Quasi-Experiments: Essential features of quasi-experiments are described on this page, as is the one-group pretest-posttest (“bad experiment”) design.
9-11. Quasi-Experimental Designs: These three pages describe and illustrate the nonequivalent control
group design, the simple interrupted time series design, and the time series with nonequivalent
control group design.
12. Program Evaluation: This page describes the major goals of program evaluation.
13. Basic and Applied Research: The reciprocal nature between basic and applied research is described
on this page.
Applied Research
Goals
Test the external validity of laboratory findings
Improve conditions in which people live and work
Natural settings
Quasi-experiments
Procedures that approximate the conditions of highly controlled
laboratory experiments
Program evaluation
Applied research to learn whether real-world treatments work
True Experiments
Manipulate Independent Variable (IV)
Treatment, Comparison conditions
High degree of control
o Especially random assignment to conditions
Unambiguous outcome regarding effect of IV on DV
o Internal validity
Obstacles to conducting true experiments in real-world settings
Permission from authorities
Difficult to gain access to participants
Random assignment perceived as unfair
o People want a “treatment”
o Random assignment is best way to determine whether treatment is
effective
o Use “waiting-list” control group or alternate treatments
Advantages of true experiments
8 general threats to internal validity are controlled
o History
o Maturation
o Testing
o Instrumentation
o Regression
o Subject Attrition
o Selection
o Additive Effects with Selection
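The central control feature above can be made concrete with a short sketch. This is a minimal illustration, not from the text; the function name, the participant IDs, and the fixed seed are assumptions for demonstration:

```python
import random

def randomly_assign(participants, conditions=("treatment", "control"), seed=42):
    """Shuffle participants and deal them round-robin into conditions.

    On average this equates the groups on all subject characteristics,
    which is what controls the selection threat.
    """
    rng = random.Random(seed)      # fixed seed only for reproducibility
    shuffled = list(participants)
    rng.shuffle(shuffled)
    return {cond: shuffled[i::len(conditions)]
            for i, cond in enumerate(conditions)}

groups = randomly_assign(range(1, 21))
# Every participant lands in exactly one condition, 10 per group.
```

Because assignment depends only on chance, neither participant preference nor experimenter judgment can create pre-existing group differences.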
Threats to Internal Validity
History
When an event occurs at the same time as treatment and changes
participants’ behavior
Participants’ “history” includes events other than treatment.
Hard to infer treatment has an effect
Example:
Does a campus recycling-awareness
program influence the amount of paper,
plastic, and cans in campus bins?
History threat: Suppose at Week 4 (X =
treatment) a popular celebrity starts to
promote recycling in the media.
Can you conclude the campus
awareness campaign was effective?
Maturation
Participants naturally change over time.
These maturational changes, not treatment, may explain any
changes in participants during an experiment.
Example
Does a new reading program improve
2nd graders’ reading comprehension?
Maturation threat: Reading
comprehension improves naturally as
children mature over the year.
Can you conclude the reading program
was effective?
[Figure: Recycling (kg) by week, 1-8, treatment X at Week 4]
[Figure: Reading comprehension score, Pre vs. Post]
Threats to Internal Validity, continued
Testing
Taking a test generally affects subsequent testing.
Participants’ performance on a measure at the end of a study may
differ from an initial testing because of their familiarity with the
measure.
Example
Does teaching a new problem-solving
strategy influence people’s ability to solve
problems quickly?
If similar problems are used in the pre-
test, faster problem solving at post-test
may be due to familiarity with the test.
Can we conclude the new strategy
improves problem-solving ability?
Instrumentation
Instruments used to measure participants’ performance may change
over time (e.g., observers may become tired or bored).
Changes in participants’ performance may be due to changes in
instruments used to measure performance, not treatment.
Example
Suppose a police protection program
is designed to decrease the incidence
of assault.
At the same time the program is
implemented (X), state reporting laws
are changed such that what
constitutes assault is broadened.
Can we conclude the program was
effective (or ineffective)?
[Figure: Mean minutes to solve problems, Pre vs. Post]
[Figure: Reported assaults by week, 1-8, program X at Week 4]
Threats to Internal Validity, continued
Regression
Individuals sometimes perform very well or poorly due to chance.
The same chance factors are unlikely to recur at a second testing, so scores will not be as extreme.
Scores will “regress” (move back) toward the mean.
Regression effects, not treatment, may account for changes in
participants’ performance over time.
Example
Suppose students are selected for an
accelerated program because of their
very high scores on a brief test.
Regression: To the extent the test is an
unreliable measure of ability, their
scores will likely regress at 2nd testing.
Can we conclude the accelerated
program was effective (or ineffective)?
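The selection-then-retest logic above can be simulated. This is an illustrative sketch, not from the text: observed scores are modeled as stable ability plus fresh random error at each testing, and the sample size, cutoff, and seed are assumptions:

```python
import random

def regression_demo(n=2000, cutoff=1.5, seed=1):
    """Select people with extreme Time-1 scores on an unreliable test,
    then retest them with NO treatment in between.

    Observed score = stable ability + random error; the error is redrawn
    at Time 2, so the selected group's mean falls back toward the mean.
    """
    rng = random.Random(seed)
    t1_scores, t2_scores = [], []
    for _ in range(n):
        ability = rng.gauss(0, 1)
        t1 = ability + rng.gauss(0, 1)      # Time-1 score with error
        if t1 > cutoff:                     # "selected for the program"
            t2 = ability + rng.gauss(0, 1)  # fresh error at Time 2
            t1_scores.append(t1)
            t2_scores.append(t2)
    mean = lambda xs: sum(xs) / len(xs)
    return mean(t1_scores), mean(t2_scores)

m1, m2 = regression_demo()
# m2 < m1 even though nothing happened between testings.
```

The drop from m1 to m2 is pure regression: any real program evaluated on this group would look ineffective (or harmful) without a comparison group.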
Subject Attrition
When participants are lost from the
study (attrition), group equivalence formed at the start of the study
may be destroyed.
Differences between treatment and control groups at the end of study
may be due to natural differences in those who remain in each group.
Example
Suppose an exercise program is offered to employees who would like
to lose weight.
At Time 1, N = 50, M weight = 225 lbs.
At Time 2, N = 25 (25 drop out)
Suppose the 25 who stayed in the
program weighed, on average, 150
pounds at Time 1.
Did the exercise program help people
to lose weight?
[Figure: Mean test score, Pre vs. Post]
[Figure: Mean weight (lb), Time 1 vs. Time 2]
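The attrition example reduces to a weighted-mean calculation; a small sketch (the function name is an illustrative assumption) shows what the dropouts must have weighed:

```python
def implied_dropout_mean(overall_mean, n_total, completer_mean, n_completers):
    """Back out the Time-1 mean of the dropouts from the group means."""
    n_drop = n_total - n_completers
    return (overall_mean * n_total - completer_mean * n_completers) / n_drop

# Numbers from the example: N = 50 at a mean of 225 lb at Time 1;
# the 25 completers averaged 150 lb at Time 1.
dropout_mean = implied_dropout_mean(225, 50, 150, 25)
# The 25 dropouts averaged 300 lb, so a Time-2 mean near 150 lb reflects
# who left the study, not weight lost by those who stayed.
```

Comparing Time-1 scores of completers and dropouts, as here, is one way to check whether attrition destroyed group equivalence.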
Threats to Internal Validity, continued
Selection
Occurs when differences exist between individuals in treatment and
control groups at the start of a study
These differences become alternative explanations for any
differences observed at the end of the study.
Random assignment controls the selection threat.
Example
Suppose a community recycling
program is tested. People interested in
recycling are encouraged to participate.
Evaluation: Compare the weight of
garbage of people in the program to
those not in the program.
Is the new recycling program effective?
Additive Effects with Selection
When one group of participants responds differently to an external
event (history), matures differently, or is measured more sensitively
by a test (instrumentation), these threats (rather than treatment) may
account for any group differences at the end of a study.
Example
Suppose School A starts a
program (X) to prevent alcohol
abuse on campus (Week 4). The
DV is number of alcohol-related
infractions in student residences.
School B is a comparison.
Suppose that during Week 4 the
newspaper at School A reports a
student death due to intoxication
(“local history effect”).
Is the program effective?
[Figure: Mean lb of garbage per week, Recycling Program vs. Not in Program]
[Figure: Alcohol-related infractions by week, 1-8, program X at Week 4, School A vs. School B]
Threats to Internal Validity, continued
Points to remember
When there is no comparison group in a study, these threats must be
ruled out:
o history, maturation, testing, instrumentation, regression, subject
attrition, selection
When there is a comparison group, these threats must be ruled out:
o selection, additive effects with selection, differential regression
Adding a comparison group helps rule out many threats to internal
validity
Rule out potential threats by analyzing the research situation for
potential history threats, etc.
Threats even true experiments may not eliminate
Contamination
o resentment, rivalry, diffusion of treatment
Experimenter expectancy effects
Novelty effects (including Hawthorne effects)
Threats to external validity
o Treatment effects may not generalize
o Best method for assessing external validity of findings: replication
Quasi-Experiments
“Quasi-” (resembling) Experiments
Important alternative when true experiments are not possible
Lack the high degree of control found in true experiments
o Often no random assignment
Researchers must seek additional evidence to eliminate threats to
internal validity.
The One-Group Pretest-Posttest Design
Example of a “bad experiment” or “pre-experimental design”
Intact group is selected to receive a treatment
o e.g., a classroom of children or group of employees
Pretest is 1st Observation (O1)
Treatment is implemented (X)
Posttest is 2nd Observation (O2)
Design is illustrated as: O1 X O2
No threats to internal validity are controlled.
Any change between pretest (O1) and posttest (O2) may be due to
treatment OR
o History
o Maturation
o Testing
o Instrumentation
o Regression
o Subject Attrition
Quasi-Experimental Designs
Nonequivalent Control Group Design
A group similar to the treatment group serves as a comparison group.
Obtain pretest and posttest scores for individuals in both groups.
Random assignment to groups is not used.
Pretest scores are used to determine if groups are equivalent.
o Groups are shown to be equivalent only on this pretest dimension
Design
treatment
O1 X O2 treatment group
_ _ _ _ _ _ _ _
O1 O2 nonequivalent control group
pretest
posttest
Example
Does taking a research methods
course improve reasoning ability?
Compare students in research
methods and developmental
psychology courses.
DV: 7-item test of methodological
and statistical reasoning
By adding a comparison group,
rule out: history, maturation,
testing, instrumentation,
regression.
Assume these threats affect both groups equally; therefore they can't be used to explain group differences.
[Figure: Mean reasoning score (0-7), Pre vs. Post, Methods vs. Developmental]
Selection and Additive Effects with Selection not ruled out.
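The logic of comparing pre-to-post change across the two groups can be sketched as a difference of change scores. The means below are hypothetical, not taken from the text:

```python
def change_score_comparison(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference of pre-to-post changes for treatment vs. control.

    Threats assumed to act equally on both groups (history, maturation,
    testing, instrumentation, regression) cancel out of this difference.
    """
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical means on the 7-item reasoning test
effect = change_score_comparison(treat_pre=3.0, treat_post=5.5,
                                 ctrl_pre=3.2, ctrl_post=3.7)
# Methods students gained 2 points more than controls, but
# selection-related threats could still explain part of this.
```

Because the groups were not randomly assigned, a nonzero difference here is suggestive, not conclusive.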
Quasi-Experimental Designs, continued
Simple Interrupted Time Series Design
Observe a DV for some time before and after a treatment is implemented.
Archival data are often used.
Look for clear discontinuity in the time-series data for evidence of
treatment effect.
Design: O1 O2 O3 O4 X O5 O6 O7 O8
Example
Suppose an intervention is designed
to improve students’ study habits.
Implemented during the summer after
sophomore year (after semester 4)
DV: Semester GPA
Suppose a discontinuity is observed.
What threats can be ruled out?
o Maturation: Assume maturational changes are gradual, not abrupt.
o Testing (GPA): If testing influences performance, these effects
likely show up in initial observations.
o Regression: If scores regress, they’re likely to do so in initial
observations.
[Figure: Mean GPA by semester, 1-8, intervention after semester 4]
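Looking for a discontinuity amounts to comparing the level of the series before and after the intervention point. A minimal sketch, with hypothetical GPA values:

```python
def level_shift(series, break_index):
    """Mean after the intervention minus mean before it.

    An abrupt shift at exactly the break point suggests a treatment
    effect; maturation and regression should instead appear gradually
    in the pre-intervention observations.
    """
    mean = lambda xs: sum(xs) / len(xs)
    return mean(series[break_index:]) - mean(series[:break_index])

# Hypothetical semester GPAs; intervention after semester 4
gpa = [2.1, 2.2, 2.1, 2.3, 3.0, 3.1, 3.0, 3.2]
shift = level_shift(gpa, break_index=4)
# A jump of about 0.9 GPA points appears right at the intervention.
```

The flat pre-intervention trend is what rules out gradual alternatives; a simple before/after mean difference alone would not.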
Quasi-Experimental Designs, continued
Time Series with Nonequivalent Control Group Design
Add comparison group to the simple interrupted time series design.
Design
O1 O2 O3 O4 X O5 O6 O7 O8
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
O1 O2 O3 O4 O5 O6 O7 O8
Example: Study Habits
Add a nonequivalent control group. These students don’t participate
in the study habits course.
Who could be in the comparison group?
What threats can you rule out?
[Figure: Mean GPA by semester, 1-8, Treatment vs. Control]
Program Evaluation
Goal: Applied research
Provide feedback to administrators of human service organizations to
help them decide
What services to provide
Whom to provide services to
How to provide services most effectively, efficiently
Big growth area (especially health care)
Program evaluators assess social services
Needs, process, outcome, efficiency
Needs
Is an agency or organization meeting the needs of the people it
serves?
o Survey research designs
Process
How is a program being implemented (is it going as planned)?
o Observational research designs
Outcome
Has a program been effective in meeting its stated goals?
o Experimental, quasi-experimental research designs; archival data
Efficiency
Is a program cost-effective relative to alternative programs?
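The efficiency question can be made concrete as cost per unit of outcome. The program names, budgets, and client counts below are hypothetical:

```python
def cost_per_outcome(total_cost, clients_helped):
    """Efficiency expressed as dollars spent per client successfully served."""
    return total_cost / clients_helped

# Hypothetical budgets and outcomes for two competing programs
program_a = cost_per_outcome(50_000, 200)   # dollars per client, Program A
program_b = cost_per_outcome(80_000, 250)   # dollars per client, Program B
# The program with the lower cost per client is the more efficient one.
```

An efficiency analysis like this only makes sense after an outcome evaluation shows both programs actually meet their goals.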
Basic and Applied Research
Basic research and applied research have a reciprocal relationship.
Basic research provides scientifically based principles about behavior
and mental processes.
These principles are applied in complex, real-world settings.
New complexities are recognized and new hypotheses are tested
using basic research.