LSC-Net: Local Systemic Change Network
Discussion: From Professional Development to Student Outcomes


posted by: Mark St. John on April 23, 1998 at 10:51AM
subject: Reflections
This is a relatively long message. I post it as a way to explain my
thinking regarding the evaluation of professional development in terms of
student outcomes. I think it reflects the deep and interwoven nature of
learning, and, indeed, of the "system" we are trying to improve and
reform... In what follows I write down the stream of thought that has
emerged for me as I struggle with the reality of trying to assess the
value of a professional development program by measuring student
achievement in the classes of those who participate:

As an "evaluator" of privately and publicly funded programs, I am always
being asked to determine 1) whether "the project has met its goals" or 2)
whether the project has succeeded in achieving its ultimate goal of
"improving student achievement". After many years, I find that both of
these questions leave me flat. While at first glance they appear
perfectly reasonable, neither is actually a deep or intelligent question,
and, worse, they may even be counterproductive.

Take the first question: ... Who cares if the project has met its goals?
Maybe the goals were dumb to begin with... Or maybe they were too
lofty... And if the answer is yes, then what? If no, then what? There
is no way at present to use the measured success of one project (which
is very specific to those who implemented it in a specific set of
circumstances at a specific time) to create "replicas" which can be
widely disseminated... Projects are not pills... and the careful
measurement of the achievement of goals creates a kind of knowledge
that does not take one very far...

Take the second question, and apply it to a professional development
program like the LSCs... That is, ask the following question: Can you
PROVE that this professional development program was directly
responsible for improving student achievement in the classrooms of the
participating teachers?

Let me make some points about why this question is essentially a poor
one to ask ... and why the very asking of it demonstrates a lack of
understanding (or an unwillingness to understand) of the actual
complexity of "student achievement"... (In fact, in my more cynical
moments I would say this question is a very good indicator of a very
weak understanding of the educational process.)

1) It is very difficult to measure "what a student knows and is able to
do".

There are great efforts being made to develop test items that are valid
and reliable measures of very simple types of knowledge and skills... It
is much more difficult to measure the "higher order" knowledge and
skills of the type that are espoused in the national standards, for
example... Thus, simply assessing what one student knows in a given
domain at a given time is very difficult to do ...

2) To measure the increase in a student's knowledge or skills over time
is even more difficult... That is, one needs to know the INCREASE in
knowledge and skills, not just what a student knows at an absolute
level.... The measurement of change in knowledge inevitably involves
subtracting some sort of pre-test measure from a post-test measure...
(Or, equally difficult, trying to compare a "treated" group with an
equivalent "untreated" group.) And when one subtracts (or compares) two
numbers that have large uncertainties in them, one gets a number with a
VERY large uncertainty in it... That makes it almost impossible to
accurately assess small changes in higher order skills.... Also, the
assessments themselves are typically designed to measure absolute levels
of knowledge and skills, and they may be very insensitive to small
changes.... Thus the point is that it is very, very hard to measure
small changes in achievement accurately...

...The signal-to-noise ratio here is very, very small...
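The point about subtracting two noisy numbers can be sketched with a bit of arithmetic. All the numbers below are invented purely for illustration; the principle is just that independent measurement errors add in quadrature when you take a difference:

```python
import math

# Hypothetical figures, for illustration only: suppose a test score on a
# 100-point scale carries a measurement uncertainty (standard error) of
# about 5 points.
pre_score, pre_err = 62.0, 5.0    # fall pre-test
post_score, post_err = 66.0, 5.0  # spring post-test

# The estimated gain is the difference of two noisy measurements.
gain = post_score - pre_score

# Independent uncertainties add in quadrature under subtraction, so the
# error on the gain is LARGER than the error on either score alone.
gain_err = math.sqrt(pre_err**2 + post_err**2)

# A 4-point gain measured with a ~7-point uncertainty: the "signal" is
# smaller than the "noise".
signal_to_noise = gain / gain_err
print(f"gain = {gain:.1f} +/- {gain_err:.2f}, S/N = {signal_to_noise:.2f}")
```

With these invented numbers, the measured gain of 4 points comes with an uncertainty of about 7 points, so the change is indistinguishable from no change at all.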

3) In spite of this, let us pretend that we are able to determine that a
student has in fact grown, over the course of a year, in his or her
abilities and knowledge... Now, what accounts for a student's growth?
In fact, there are many, many possible inputs that may have contributed
to this growth... there is the natural maturation of the student...
there are many influences outside of school that both promote and
inhibit growth... and there are many experiences within school and within
the classroom that contribute to changes in ability... Thus, even if we
could accurately measure the growth of a student's ability in, say,
doing inquiry, or communicating in mathematics, or even carrying out
long division... there are many, many possible contributors to that
growth....

4) Also, let us be clear that a child's knowledge is cumulative... Most
of what a student knows and is able to do in seventh grade is not due to
his or her experience in seventh grade... Rather, it is due to what
happens both in and out of school over all the time leading up to seventh
grade... Thus, it is a very real (and very common) mistake to assume that
a student's achievement is somehow due to the experiences he or she had
during the past several months in that grade with that teacher... This
is only the tip of the iceberg.... Thus, assigning attribution for the
development of a particular skill to a particular classroom experience
is to deny the long-term, cumulative, and non-linear nature of
learning... Perhaps what a child is suddenly able to do in seventh grade
is the slow maturation of experiences that took place in grades 5 and
6.... Thus, what a child knows and is able to do ... is only very
partially a product of the experience he or she is having in a particular
classroom with a particular teacher in a particular year...

5) BUT let's even pretend that it is possible to assign credit to a
particular classroom for a particular increase in student learning.
Then we still need to know what it was about that classroom that made it
effective. And here again the number of contributors is huge... That
is, the instruction that takes place in a given classroom on a given
day is the product of very many factors... (This is the very essence
of the argument for Systemic Change.) The instruction that actually
happens is influenced by the state and local curriculum, the available
texts and materials, the time of the year, the state and local
assessments, the school culture, the department or grade-level
priorities, etc... Perhaps we should attribute an increase in student
achievement to a reduced class size, to the introduction of new state or
district standards, to the presence of a new assessment system, to a new
curriculum in use, to an effort to restructure schools, etc... All of
these are in addition to the variables that are associated with the
teacher -- his or her content knowledge and background, pre-service
experiences, teaching history, involvement in professional
associations, and, yes, perhaps even some professional development
experiences he or she may have had....

SO.... let us review the argument made thus far... increases in student
achievement on important concepts and skills are very hard to measure...
and there are very many determinants of such achievement and of any
increases in achievement... and one possible contributor is the
classroom of the current year... and there are many, many factors that
contribute to the nature and quality of the instruction in that
classroom.... only one of which is what the teacher chooses and is able
to do....

6) And if we keep going backwards along this causal ladder... we will
find that the teacher's practice is shaped, in part, by his or her
knowledge and beliefs... and these are in turn formed over many years
by many experiences.... Quite frankly, one professional development
program, no matter how wonderful, comprises only one small part of what
that teacher does, knows, and believes...

7) Just as with student learning, teacher learning is cumulative and
non-linear... When exactly does a professional development experience
"kick in"? Does it improve instruction immediately? Does it take a year
or two? How much and what kind of PD experience counts here? What if a
professional development experience leads a teacher to other programs
and other experiences? Who gets the "credit"? What happens when a
teacher involved in an LSC uses a new curriculum at a grade level that
is interested in thematic teaching in a school that is undergoing
restructuring in a district that has a new scope and sequence in a state
with new assessments? What is causing what here?

Professional learning follows what I think of as a "Waring blender"
process... Each new experience is ground up in the Waring blender of
past experiences and knowledge, and it becomes impossible to extract the
sources of knowledge and experience that contribute to the final
stew.... And, I think, paradoxically, the more profound an experience
is, the more it becomes assimilated, and the more unidentifiable its
source becomes....

8) Thus, professional development may well contribute strongly to a
teacher's knowledge and beliefs, but it is only one input... and the
inputs are highly interactive with each other.... so that classroom
instruction is a stew that is uniquely and inextricably shaped by all of
these interacting factors...

Let us say we had a classroom situation where student achievement of the
kinds of skills and knowledge espoused in the national standards seemed
very high... What is likely to be the cause??? My guess is that you
would find a WHOLE FABRIC of high quality... good schools, good
curriculum, professional teachers, supportive community, etc... Not to
mention a likelihood of an upper-SES community and a very strong family
support system... etc... This positive situation might also likely
involve a strong professional development system... But that is only one
set of threads adding to the strength of the fabric... Without the
other threads... there is nothing there... no matter how strong the
professional development threads...

ANALOGIES...

Let me end this diatribe with some analogies to make the point
further... And remember... all I am doing here is trying to dispel the
idea that it is possible, or even sensible, to assess professional
development by measuring student achievement... (Note that this is very
different from saying that the ULTIMATE goal of professional development
is to improve student achievement.)

If a teacher is meant to be a "guide" or a "coach" who assists student
learning, then we ought to be able to use that metaphor to look at
other domains... Take, for example, a baseball coach... If a
baseball coach attended a 3-day clinic on the latest techniques in
hitting, would we assess the quality of that clinic by measuring the
team's win/loss record in the next year? (Remember, again, there may
be a few other variables that affect how the team is doing...)

If a cook goes to a cooking class, does that mean that the patrons will
be happier at the restaurant? (Somehow we always picture these
situations in our minds by holding the constraints and other variables
constant...) Could we evaluate the quality of the cooking class by
asking the patrons about the quality of the restaurant? (What if the
restaurant changed "superintendents" every two years, couldn't get the
materials the chef wanted, patrons came and went on a frequent
basis, and there was an overwhelming demand for hamburgers and an
appreciation of little else?)

If a pilot attends a flight training clinic, does it mean that the
airline will have a better safety record? Seems logical... but it does
not necessarily follow... Can we evaluate the training clinic by looking
at the airline's safety record? (Consider a situation where the planes
are designed by pilots on a task force during the summer, where passengers
are asked to bring maps from home, and where lead pilots are asked to
train the other pilots on Saturday mornings...)

So, going back to the whole-fabric notion, professional development is
a NECESSARY BUT NOT SUFFICIENT condition for a healthy and robust
educational system... trying to evaluate its impact in the absence of the
other critical factors makes little sense....

........................................................................

What I have said up to this point argues that it is simplistic to try to
assess professional development by looking at student achievement,
particularly in those cases where other critical dimensions of the
system are lacking or are not aligned... But this does not mean that
one cannot look in productive ways at the ways in which, and the extent
to which, professional development can contribute to the overall fabric
of the system that determines the quality of instruction. That is what
I hope we can do as we think about the best ways to identify and
assess the real contributions of professional development.

The fact that professional development should not be evaluated on the
basis of student achievement does not mean that there cannot be a
productive focus on students in the conceptualization and design of
professional development... Hence, professional development that focuses
on real classrooms, on student work, and on the postulates of national
standards is all quite legitimate...

AND it may be possible to identify the ways in which PD adds to the
capacity of the system and enhances the probability that high quality
instruction will occur ...

OR it may be possible to assess the QUALITY of professional development
per se... and use that assessment to make judgments about its merit....

I was in a school the other day where the teachers were all quite
motivated and doing quite a good job in a difficult situation. They
were working with students from a nearby housing project and using many
innovative strategies to help them learn the basics as well as
experience inquiry, critical thinking, etc... But these teachers were
discouraged... their school had just been put on a list of district
schools that were "inadequate"... The reason for this was that only 38%
of the fourth graders had scored above the adequacy cutoff on the
district math test... (They needed more than 40% to avoid the label of
"inadequate"...)

Note that there was no attempt to determine the "value added" of this
school for its students. (Perhaps only 5% would have scored above
adequate without the programs this school offered.) And note that there
are probably other schools that score better but add little or
nothing to the knowledge their students bring with them...
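The value-added point can be made concrete with a toy calculation. Only the 38%, 40%, and 5% figures come from the story above; the second school and its baseline are invented for the sake of the contrast:

```python
# A toy illustration of absolute pass rate vs. "value added". Only the
# 38% / 40% / 5% figures come from the story; School B is hypothetical.

cutoff = 0.40  # district threshold: at or below this, a school is "inadequate"

schools = {
    # name: (observed pass rate, estimated pass rate without the school's programs)
    "School A (the one in the story)": (0.38, 0.05),
    "School B (higher-scoring)":       (0.45, 0.44),
}

for name, (observed, baseline) in schools.items():
    value_added = observed - baseline  # crude value-added: observed minus baseline
    label = "adequate" if observed > cutoff else "inadequate"
    print(f"{name}: pass rate {observed:.0%} ({label}), "
          f"value added {value_added:+.0%}")
```

On the district's absolute measure, School A is "inadequate" and School B passes; on even this crude value-added measure, School A has added roughly 33 points and School B roughly one.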

I relate this story because it shows the great tendency of state and
district administrators (politicians?) to use oversimplified, and just
plain wrong, assessment procedures... Had they assessed value added,
quality of instruction, or improvement over time, I suspect they would
have come up with very different findings....

Thus, I hope we can work together to think hard and carefully about
assessment and the inferences that are to be made from various
measures... I am always astounded that in science education there are
methods of assessment, and conclusions drawn from them, that would be
seen as flawed, if not fraudulent, in the scientific disciplines....

I think it was Alfred North Whitehead who said, "For every complex
problem there is a simple, and wrong, solution..."

I urge us all to be more careful and thoughtful as we proceed in what
are essentially murky political waters.