
Introduction to Evaluation Research

Weiss defines evaluation as “the systematic assessment of the operation and/or the outcomes of a program or policy, compared to a set of explicit or implicit standards, as a means of contributing to the improvement of the program or policy” (1998, p. 4). In an earlier book, Weiss (1972) describes evaluation as “an elastic word that stretches to cover judgments of many kinds” (p. 1).

The focus of evaluation research is on assessing an event or program and making a judgment about its usefulness. Because the researcher necessarily brings an element of value judgment to that assessment, this type of research is arguably not a purely quantitative enterprise.

In terms of methodology, there is consensus that both quantitative and qualitative methods have an important place in programme evaluation (Clarke and Dawson, 1999). Impact evaluation draws on the canonical research procedures of the social sciences. Clarke and Dawson also note that systematic evaluative research has become an increasingly prominent phenomenon across the social sciences in recent years.

Evaluation is inherently political: what happens when a new technology is introduced is shaped by organizational and implementation processes and, in turn, shapes them. Evaluation is also political because it touches the needs, values, and interests of different stakeholders, and it can be used to influence system design, development, and implementation. While the results of post-hoc or summative assessments may influence future development, formative evaluation, which precedes or runs concurrently with systems design, development, and implementation, is a helpful way to incorporate human, social, organizational, ethical, legal, and economic considerations into all phases of a project.

Weiss (1998, pp. 20-28) identifies several purposes for evaluating programs and policies. They include the following:

1. Determining how clients are faring

2. Providing legitimacy for decisions

3. Fulfilling grant requirements

4. Making midcourse corrections in programs

5. Making decisions to continue or terminate programs

6. Testing new ideas

7. Choosing the best alternatives

8. Recording program history

9. Providing feedback to staff

10. Highlighting goals

Process evaluation focuses on “what the program actually does” (Weiss, 1998, p. 9). Process indicators are somewhat similar to performance measures, but they focus more on the activities and procedures of the organization than on the products of those activities.

Any evaluation method that involves the measurement of quantitative/numerical variables probably qualifies as a quantitative method, and many of the methods already examined fall into this broad category. Among the strengths of quantitative methods are that the evaluator can reach conclusions, with a known degree of confidence, about the extent and distribution of the phenomenon; that they are amenable to an array of statistical techniques; and that they are generally assumed to yield relatively objective data (Weiss, 1998, pp. 83-84).

Experimental methods usually, but not always, deal with quantitative data and are considered to be the best method for certain kinds of evaluation studies. Indeed, “the classic design for evaluations has been the experiment. It is the design of choice in many circumstances because it guards against the threats to validity” (Weiss, 1998, p. 215). The experiment is especially useful when it is desirable to rule out rival explanations for outcomes. In other words, if a true experimental design is used properly, the evaluator should be able to assume that any net effects of a program are due to the program and not to other external factors.
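To make that logic concrete, the short Python sketch below (hypothetical data, not drawn from Weiss) simulates a simple two-group randomized design: outcome scores for randomly assigned treatment and control groups are compared with an independent-samples t-test, and because randomization balances other influences on average, the group difference can be read as an estimate of the program’s net effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical outcome scores; the ~5-point program effect is assumed for illustration.
control = rng.normal(loc=50, scale=10, size=100)
treatment = rng.normal(loc=55, scale=10, size=100)

# With random assignment, the mean difference estimates the program's net effect.
net_effect = treatment.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treatment, control)

print(f"Estimated net effect: {net_effect:.2f} points")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```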

As is true for basic research, qualitative methods are becoming increasingly popular. In fact, “the most striking development in evaluation in recent years is the coming of age of qualitative methods. Where once they were viewed as aberrant and probably the refuge of those who had never studied statistics, now they are recognized as valuable additions to the evaluation repertoire” (Weiss, 1998, p. 252).

Weiss (1998) reminds us that the evaluator should also give careful thought to the best time to conduct the evaluation, the types of questions to ask, whether one or a series of studies will be necessary, and any ethical issues that might be generated by the study.

Inconsistent data collection techniques, biases of the observer, the data collection setting, instrumentation, the behaviour of human subjects, and sampling can all affect the validity and/or reliability of measures. The use of multiple measures can help to increase the validity and reliability of the data. They are also worth using because no single technique is adequate for measuring a complex concept, multiple measures tend to complement one another, and separate measures can be combined to create one or more composite measures (Weiss, 1998).
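As a small illustration of combining separate measures, the sketch below uses three hypothetical indicators (the column names are invented for the example): each measure is standardized so it contributes on a common scale, and the standardized scores are then averaged into a single composite measure.

```python
import pandas as pd

# Hypothetical scores on three separate measures of the same underlying concept.
data = pd.DataFrame({
    "survey_satisfaction": [3.2, 4.1, 2.8, 3.9, 4.5],
    "observer_rating":     [60, 75, 55, 70, 80],
    "usage_frequency":     [12, 20, 9, 18, 25],
})

# Standardize each measure (z-scores) so all contribute on a common scale,
# then average them into a single unweighted composite measure.
z_scores = (data - data.mean()) / data.std(ddof=0)
data["composite_score"] = z_scores.mean(axis=1)

print(data)
```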

The basic task of data analysis in an evaluative study is to answer the questions that must be addressed in order to determine the success of the program or service and the quality of its resources. Those questions should, of course, be closely related to the nature of what is being evaluated and to the goals and objectives of the program or service. As Weiss puts it, “The aim of analysis is to convert a mass of raw data into a coherent account. Whether the data are quantitative or qualitative, the task is to sort, arrange, and process them and make sense of their configuration. The intent is to produce a reading that accurately represents the raw data and blends them into a meaningful account of events” (1998, p. 271). In addition, the nature of the data analysis will be significantly affected by the methods and techniques used to conduct the evaluation.

Most data analyses, whether quantitative or qualitative in nature, will employ some of the following strategies: describing, counting, factoring, clustering, comparing, finding commonalities, examining deviant cases, finding co-variation, ruling out rival explanations, modeling, and telling the story. Evaluators conducting quantitative data analyses will need to be familiar with techniques for summarizing and describing the data; and if they are engaged in testing relationships or hypotheses and/or generalizing findings to other situations, they will need to utilize inferential statistics (Weiss, 1998).
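The following sketch, again with hypothetical data, illustrates the two layers just mentioned: descriptive statistics that summarize each variable, and an inferential test (here a Pearson correlation) that examines co-variation between program exposure and an outcome.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical data: hours of program contact and an outcome assumed to be related to it.
hours_of_service = rng.normal(10, 3, size=80)
outcome_score = 40 + 1.5 * hours_of_service + rng.normal(0, 5, size=80)

# Descriptive analysis: summarize each variable.
print(f"hours: mean = {hours_of_service.mean():.1f}, sd = {hours_of_service.std(ddof=1):.1f}")
print(f"score: mean = {outcome_score.mean():.1f}, sd = {outcome_score.std(ddof=1):.1f}")

# Inferential analysis: test whether the two variables co-vary.
r, p = stats.pearsonr(hours_of_service, outcome_score)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
```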

As part of the planning, the evaluator should have considered how and to whom the findings will be communicated and how the results will be applied. A good report will be characterized by clarity, effective format and graphics, timeliness, candour about strengths and weaknesses of the study, and generalizability (Weiss, 1998), as well as by adequacy of sources and documentation, appropriateness of data analysis and interpretation, and basis for conclusions.

References:

Clarke, A., & Dawson, R. (1999). Evaluation Research: An Introduction to Principles, Methods and Practice. SAGE Publications.

Weiss, C. H. (1972). Evaluation Research: Methods of Assessing Program Effectiveness. Englewood Cliffs, NJ: Prentice-Hall.

Weiss, C. H. (1998). Evaluation: Methods for Studying Programs and Policies (2nd ed.). Upper Saddle River, NJ: Prentice Hall.
