Action research and evaluation on line

Session 10: Evaluation as action
research

 

 

This is Session 10 of areol, action research and evaluation on line, a 14-week public course offered as a public service by Southern Cross University and the Southern Cross Institute of Action Research (SCIAR).

...in which the earlier sessions on action research, participation and rigour are noted as relevant to the practice of evaluation, and in which the motives of the evaluator and of those who employ her are identified as very important

 


Have you ever worked in an organisation where there was regular appraisal? If so, how well did it work?  When you were to be appraised, did you look forward to it?  If you were the appraiser, how pleasant did you find it?  For both appraiser and appraisee, how could it have been improved?
     Appraisal is a form of evaluation.  If evaluation is regarded in the same way as appraisal, what implications does that have for how it is done?

 

In this session

The centrepiece of this session is an examination of appraisal in work settings.  I've chosen that setting because it is where appraisal systems are often most formalised and their effects most easily identified.

But I think the conclusions you draw have importance in other settings too: communities, families, social occasions.  I invite you to keep these other settings in mind.  As well, I think you can extend some of your conclusions to evaluation in those settings.

 

This is the first of a number of sessions on approaches to evaluation which draw on action-research-like processes and principles.

Action research, I think, lends itself to evaluation.  (You could almost say that action research is evaluation, of a sort.  You think;  you act;  you evaluate.)  Next session I'll describe a particular evaluation process, the Snyder process, which applies many action research principles.

 

A form of evaluation -- appraisal

Before we come to that, though, I invite you to consider the nature of evaluation.  Here is a thought experiment...

 


Word goes around that your employers have hired someone to evaluate the productivity and effectiveness of all the people in the section or organisation.

Rumour has it that a detailed report on each employee will be given to your immediate superior, among others, though perhaps not to you.


 

What would be some of the thoughts that go through your mind when you hear this?  What would be your hopes?  Your fears?

My prediction is that you would be curious about the results.

However, I suspect you would also have some doubts.  I think you would wonder about the accuracy of the results.  I would expect you to be at least a little apprehensive about the motivation of the organisation in hiring the evaluator.

I'd expect, too, that you would have some misgivings about the use that might be made of the results.  I wouldn't be surprised if you were a bit apprehensive about who would report the results to you, and how, and what they would do about the results.

But read on...

 


You arrive at work one morning to find an envelope on your desk.
When you open it, there is a note inside, from the evaluator, and a
further envelope.

The note says:

"Inside the envelope is a report on your productivity and a comparison
with the productivity of your colleagues.  I acknowledge that they may
not be entirely accurate though I've tried hard to make them as accurate
as I can.

"No-one has seen these results except me.  I have no intention of
reporting them to anyone else.  You may take the envelope and destroy
it.  You can open it and read the report.  You can discuss it with others
if you wish.

"If you read it, it is then entirely your decision what to do about it, if
anything.  If you would like to do something I would be pleased to
provide whatever help I can."


 

Would you open the inner envelope and read the report?

With what hopes and fears?

Has your attitude changed at all?   If so, in what respects?

A little more positive, I would guess, but still with some apprehension and perhaps even anger.

What could the evaluator have done for your attitudes to be more positive?

 

As a term, "evaluation" has a bad odour in some circles.  This arises mostly, I think, from doubts about the motives of evaluators and those who hire them.  I think the consequence is clear.  If we wish someone to act on our evaluation reports we may need to be clear about their motives and about ours.

I have some biases here.  It's simpler for both of us if I reveal them.  (You are at liberty to decide your own biases, or avoid any bias if you can do so.)

To my mind the single most important aspect of evaluation is the motive of its users -- is it to enable people to do better what they want to do?  Or is it to control them?  (This applies to more than just evaluation.  It's also true of leadership, and teaching, and parenthood, and consulting, and increasingly -- at least here in Australia -- of politics.  And, for that matter, of living.)

My experience has been that most people would much rather do a good job than a poor job.  (Interpret "job" as broadly as you wish.)  Some have given up hope of being able to do so.  Many are still trying.  Some of those who have given up can be retrieved.

So enablement makes a lot more sense to me than control.  In what follows I'll try to describe evaluation processes from that perspective.  You can, of course, choose your own perspective.

 

By now it is probably clear why I think of action research and evaluation as closely related.  To my mind, both function well when they pursue understanding and change.  For reasons of both ethics and practicality, both pursuits may benefit from wide participation. 

 

Evaluation throughout action research

As I said, I'll be describing a specific process.  I'll also be referring you to archived resources which describe it in some detail.  However, you could take any overall action research approach and apply it to an evaluation task.  That is, pretty much, what happened in one of the case studies reported earlier.

 For example, consider the cycle described in one of the early sessions:

 

 
You might start an evaluation with as broad a question as "What goes on around here?" -- or, as in the evaluation case study, "Tell me about ...".  The answers can then direct you to a better sample, better questions, better methods.  Eventually, better answers.

  
Consider, too, one other cycle which we examined:

 

 
You will recall that this can also be regarded as critical reflection before, during and after action.  In other words, each phase of each cycle of action research contains an evaluative component.

 

  

Evaluation as action and research

This quote from Lee Cronbach et al. captures my biases well:

"The distinction between studies that ask how good a service is and
those that ask how the service can be improved has been around for
decades.  [...]  As we see it, evaluations are used almost entirely in a
formative manner when they are used."

("Formative" means, roughly, addressing the question: How can this program be improved?   Not: How good is this program? -- which is the approach of summative evaluation.)

The emphasis in previous sessions has been on action research, and its pursuit of the twin aims of action and research.  Think back over those sessions...

The early sessions were about participation.  It is conventional for evaluators to maintain some independence from the stakeholders.  The stakeholders are, most commonly, involved only as informants.  If you wish the evaluation to lead to successful change, you might wish to consider the appropriate level of participation.

The more recent sessions have been about rigour.  Their thrust has been towards those means of achieving rigour which also allow flexibility, responsibility, and participation.  You might like to consider if, using these methods, you might achieve an adequate level of rigour and participation at the same time.

 

For me, the issues in evaluation are the same as those canvassed earlier.  Virtue can still be found in good entry and contracting, relevant stakeholder involvement, and good relationships.  Virtue can also be found in multiple sources of information, cyclic processes, and a tireless quest for disconfirming evidence. 

 

Notes

  1. Cronbach, Lee J.; Ambron, Sueann R.; Dornbusch, Sanford M.; Hess, Robert D.; Hornik, Robert C.; Phillips, D.C.; Walker, Decker F.; and Weiner, Stephen S.  (1980) Toward reform of program evaluation: aims, methods and institutional arrangements.  San Francisco: Jossey-Bass.

 

Archived resources

There is an archived paper "qualeval" on evaluation.  It uses the Snyder process to illustrate some points about evaluation in general, and qualitative evaluation in particular.  It is written from an action research perspective.  The URLs are

http://www.uq.net.au/action_research/arp/qualeval.html
ftp://ftp.scu.edu.au/www/arr/qualeval.txt

There are also some relevant bibliographies on the archive.

Patricia Rogers has prepared a bibliography on evaluation and meta-evaluation.  You'll find it in the archive with the name meta-eval-bib.  The URLs are:

http://www.uq.net.au/action_research/arp/meta-eval-bib.html
ftp://ftp.scu.edu.au/www/arr/meta-eval-bib.txt

 Marcia Conner has included some material on evaluation in her training and development bibliography.  It's in the archive, with the title trdbooks.txt.  The URLs are:

http://www.uq.net.au/action_research/arp/trdbooks.html
ftp://ftp.scu.edu.au/www/arr/trdbooks.txt

The action research bibliography "biblio", which is mostly annotated, contains quite a few works on evaluation.  The URLs are:

http://www.uq.net.au/action_research/arp/biblio.html
ftp://ftp.scu.edu.au/www/arr/biblio.txt

 

Some other archived resources (not specifically mentioned in this session but some mentioned in previous sessions) describe various forms of data collection.  All of these can be used for action research, or for evaluation:

voting     the use of voting techniques to collapse long lists, or arrange them in priority.   The URLs are

http://www.uq.net.au/action_research/arp/voting.html
ftp://ftp.scu.edu.au/www/arr/voting.txt

delphi     mentioned previously, a dialectic process:

http://www.uq.net.au/action_research/arp/delphi.html
ftp://ftp.scu.edu.au/www/arr/delphi.txt

focus     a structured form of focus group, as previously mentioned

http://www.uq.net.au/action_research/arp/focus.html
ftp://ftp.scu.edu.au/www/arr/focus.txt

gfa     group feedback analysis, an alternative to survey-feedback

http://www.uq.net.au/action_research/arp/gfa.html
ftp://ftp.scu.edu.au/www/arr/gfa.txt

search     a future-oriented goal-setting or visioning process which can be used for data collection when agreement is likely to be easily reached

http://www.uq.net.au/action_research/arp/search.html
ftp://ftp.scu.edu.au/www/arr/search.txt

options     a dialectical process for choosing between two alternatives

http://www.uq.net.au/action_research/arp/options.html
ftp://ftp.scu.edu.au/www/arr/options.txt

 

Other reading

John Owen has written a brief and relatively practical overview of evaluation:

Owen, J.M.  (1993) Program evaluation: forms and approaches.  St Leonards, NSW: Allen & Unwin.

For reference I like Michael Scriven's alphabetically arranged encyclopedia.  It's not bedtime reading, but I think it's valuable:

Scriven, M.  (1991) Evaluation thesaurus, fourth edition.  Newbury Park, Ca.: Sage.

I believe a fifth edition is due for publication shortly (and may even be available by now).

For a detailed description of an action-research-like evaluation approach, Egon Guba and Yvonna Lincoln have a process they call fourth generation evaluation.  The detailed description is:

Guba, E.G.  and Lincoln, Y.S.  (1989)  Fourth generation evaluation.  Newbury Park, Ca.: Sage.

If you can get hold of it, here is a brief and readable overview of some of the features of their approach:

Guba, E.G.  and Lincoln, Y.S.  (1990) Fourth generation evaluation: an 'interview' with Egon Guba and Yvonna Lincoln.  Evaluation Journal of Australia, 2(3), 3-14.

They take a strong constructivist approach (put over-simply, they seem to assume the world is in people's imagination, not "out there").  Whether or not you would go as far in this regard as they do, I think you can still use their approach to good effect.

 

There are a number of high-quality mailing lists which deal with evaluation.  I quite like the list "evaltalk", sponsored by the American Evaluation Association.  The regular participation of such people as Michael Scriven, Jerome Winston, Patricia Rogers, and many other experienced evaluators lifts the quality of material.

To subscribe, send the message

      subscribe evaltalk your_first_name your_last_name
      (e.g.  subscribe evaltalk Marie Curie)
 to   listserv@ua1vm.ua.edu

Similar comments can be made about the more recent govteval.  As its name suggests, it focuses more on public sector evaluation.  Less apparent from the name is that it is, in my view, more cosmopolitan in its approach than evaltalk.

To subscribe, send the message

      subscribe govteval
 to   majordomo@nasionet.net

(Its listserver is "majordomo", which gets upset if you include your name.)

 

Activities

A thought experiment

There's one, above, in the text of this session.

An individual activity

Most people are starved of feedback, most of the time.  Other people tell us very little about how we "come across".  It's hard for us to know how others experience us.  Soliciting feedback can be a useful experience in several ways, in addition to sensitising us to some of the traps of giving feedback.

Choose some aspect of your behaviour or performance that you'd like to know more about.  Choose two or three people who can comment on that aspect.  It may work better in social or community settings than at work.

I suggest you do it in four phases:

  1. Guess at what you think they will report.
  2. Approach them, and first talk with them about the difficulty of getting accurate feedback.
  3. Ask for the feedback.
  4. Say it back to them, in your own words, to check that you have understood fully.

Take some time after each conversation to record what you learned about yourself, and about giving and getting feedback.

 

For your learning group

The individual activity, above, is a useful precursor to this activity.

In your learning group, working individually, list the main strengths and weaknesses of your group as a whole.  List, also, the most important contribution that each group member (including yourself) makes to the group.  Then compare notes.

 


Let's practise action research on areol.  What ideas do you have for improving this session?  What didn't you understand?  What examples and resources can you provide from your own experience?  What else?  If you are subscribed to the email version, send your comments to the discussion list.  Otherwise, send them to Bob Dick

 

In this session we've begun to explore some of the "big picture" aspects of evaluation.  We've touched on the use of an action research approach to evaluation, and on the presence of evaluative components within the action research cycle.  I've suggested that evaluation (like many endeavours) may be used to control people, or to enable them to be more effective.  I've indicated that the emphasis in areol will be on evaluation as action research, for enablement.

The next session begins an examination of the Snyder evaluation process.  See you then.  --  Bob

_____

 

Copyright © Bob Dick 2002.  May be copied provided it is not included in material sold at a profit, and this and the following notice are shown.

This document may be cited as follows:

Dick, B.  (2002) Evaluation as action research.  Session 10 of Areol - action research and evaluation on line.
URL http://www.uq.net.au/action_research/areol/areol-session10.html

 


 

 

 

Maintained by Bob Dick; this version 11.04w; last revised 20020712

A text version of this file is available at
ftp://ftp.scu.edu.au/www/arr/areol-session10.txt