
How-To Guide of the Month – Program Evaluation

August 24, 2020 | Annie Cole, Pacific University

Program evaluation is an important part of continuous improvement and accountability in higher education. This brief how-to guide will define program evaluation, explore its major components, and provide resources for continued learning.

 

What is program evaluation?

Program evaluation is “a process that consists in collecting, analyzing, and using information to assess the relevance of a public program, its effectiveness and its efficiency” (Josselin & Le Maux, 2017, pp. 1-2). It can also be described as “the application of systematic methods to address questions about program operations and results. It may include ongoing monitoring of a program as well as one-shot studies of program processes or program impact. The approaches used are based on social science research methodologies and professional standards” (Newcomer, Hatry, & Wholey, 2015, p. 8). A third definition describes program evaluation as “the systematic collection and analysis of information about the process and outcomes of a program in order to make improvements or judgments about the quality or value of the program” (Chyung, 2015, p. 83). Program evaluation is used to enhance decision making, and “examines programs to determine their worth and to make recommendations for programmatic refinement and success” (Spaulding, 2014, p. 5).

 

The aim of program evaluation is to answer questions about a program’s performance and value. Program evaluation should have three outcomes: assess program implementation, assess program results, and highlight methods of program improvement (Newcomer, Hatry, & Wholey, 2015).

 

Program evaluation differs from research in that research aims to develop new knowledge about a phenomenon, while evaluation aims to measure the value or quality of a subject, is context-specific, and is not always shared with the public. While “measurement is about assessing the characteristics of something, evaluation involves an additional step of drawing an evaluative conclusion based on the measured results” (Chyung, 2015, p. 81).

 

What defines a “program”?

A program can be understood as “a set of specific activities designed for an intended purpose, with quantifiable goals and objectives” (Spaulding, 2014, p. 5) or “a set of resources and activities directed toward one or more common goals, typically under the direction of a single manager or management team. A program may consist of a limited set of activities implemented at many sites by two or more levels of government and by a set of public, nonprofit, and even private providers” (Newcomer, Hatry, & Wholey, 2015, p. 7).

 

Why is program evaluation necessary?

Although there are many reasons that program evaluation is necessary, one of the main reasons is to show accountability: “We live in an era of accountability, where various stakeholders at the national, state, and local level expect to see results from program implementation as displayed through program evaluation” (Newcomer, Hatry, & Wholey, 2015, p. xvii).

 

How do I know if I should complete a program evaluation?

Investing in program evaluation must be worth the time, energy, cost, and potential outcomes. To gauge whether an evaluation is worth undertaking, ask yourself and your colleagues these five questions:

  1. Can the results of the evaluation influence decisions about the program?
  2. Can the evaluation be done in time to be useful?
  3. Is the program significant enough to merit evaluation?
  4. Is program performance viewed as problematic?
  5. Where is the program in its development? (Newcomer, Hatry, & Wholey, 2015, p. 10)

The value of a program evaluation is influenced by three factors: the outcomes, or evidence, it produces; the credibility of the methods used; and how the outcomes are translated into practical changes. These factors correspond to three key decisions:

  1. Deciding what data needs to be collected
  2. Determining data analysis methods
  3. Deciding how the results will be used (Newcomer, Hatry, & Wholey, 2015)

 

The program evaluation process

If you have determined that program evaluation is appropriate, you can begin designing and facilitating the evaluation.

  1. Clarify the purpose of your evaluation.
  • Is the purpose of the evaluation formative or summative? What function does the evaluation serve? A formative evaluation is completed during the developmental stage of a program; it is process-focused and provides suggestions for change and quality improvement. A summative evaluation is conducted post-implementation, is outcomes-focused, and judges the quality of the program (Chyung, 2015).
  • When will the evaluation be conducted? Evaluations can be conducted before or after a program is implemented; a front-end evaluation may assess needs or possibilities, while a back-end evaluation assesses outcomes and impact (Chyung, 2015).
  • Is the purpose of the evaluation to judge merit or worth? A judgment of program merit is focused on internal quality and is context-independent, while a judgment of program worth focuses on external value and is context-dependent (e.g., tied to culture, budget, and needs) (Chyung, 2015).
  • Is the purpose of evaluation internal methods of improvement or accountability to outside stakeholders (Newcomer, Hatry, & Wholey, 2015)?
  • Is the purpose of the evaluation ongoing or one-shot (Newcomer, Hatry, & Wholey, 2015)? In other words, will you complete the evaluation over multiple phases or all at once?
  2. Choose an appropriate program evaluation approach

Based on the purpose of your evaluation, choose an evaluation approach that will help you meet the evaluation goals.

  • Objectives-Based Approach – Asks whether the program meets its stated objectives. Objectives are written by the program creator and the evaluator, often using benchmarks.
  • Decision-Based Approach – Questions serve as criteria, not objectives or benchmarks.
  • Goal-Free Approach – Criteria are not set in advance; they emerge as the evaluator works through the data collection process. The approach attempts to reduce the bias that pre-existing criteria can introduce.
  • Participatory Approach – Stakeholders select the criteria, with a focus on exploring the lived experience of the stakeholders.
  • Expertise-Oriented Approach – Criteria are internalized by an expert in the field, who serves as a judge rather than an evaluator. No formal criteria are identified; judgments rest on years of expert experience.
  • Consumer-Based Approach – Criteria are used for rating or judging a product or program, generally outside of education. An example of this approach is a consumer report (Spaulding, 2014, p. 44).

 

  3. Design the evaluation & develop a logic model

Program evaluation experts offer different advice on how to design a program evaluation. Most experts suggest developing a logic model and/or an evaluation matrix.

A logic model can help program evaluation planning in three ways: it presents program components and outcomes in a clear manner, it helps the evaluator decide which features to evaluate, and it helps the evaluator develop evaluation questions (Lawton, Brandon, Cicchinelli, & Kekahio, 2014).

For example, a logic model can be developed before the program evaluation begins as a way to explore which aspects of the program will be evaluated. The evaluator can use the four components below to begin developing a logic model (Kekahio, Cicchinelli, Lawton, & Brandon, 2014).

 

Resources (also called inputs): material and non-material items needed to develop and implement the program.

Activities: processes, actions, and events through which the program achieves its outcomes (e.g., training, collaborating, collecting data).

Outputs: direct results, such as the number of attendees, trainings held, or books read.

Outcomes: short- and long-term results, such as beliefs, behaviors, influence, and graduation rates.

Once the components above have been filled in with program information, the evaluator can review them to decide which aspects (resources, activities, outputs, and/or outcomes) will be evaluated. Specific objectives and/or evaluation questions can be developed based on this logic model.

 

An evaluation matrix is used to lay out the design of the program evaluation. The matrix is completed after the program components have been reviewed through logic modeling. Spaulding (2014) and Newcomer, Hatry, and Wholey (2015) offer sample evaluation matrix tables:

 

Developing an Evaluation Matrix (Spaulding, 2014, p. 19) – column headings:

  • Evaluation Objective (statement)
  • Stakeholders
  • Tools used to collect data
  • When to be collected
  • Purpose (summative/formative)

Sample Design Matrix (Newcomer, Hatry, & Wholey, 2015, p. 26) – column headings:

  • Researchable Question(s)
  • Criteria and Information Required and Sources
  • Scope and Methodology, Including Data Reliability
  • Limitations
  • What this analysis will likely allow [our group] to say (i.e., what are the expected results?)

When developing the evaluation matrix, the evaluator can consider the objectives of the evaluation, stakeholders involved, data collection instruments and sources of data, methodological approaches (experiments, case study, etc.), use of technology to collect/extract data, data analysis methods, and reporting plans. At this stage, decide how the final report will be created and distributed, and how the findings of the report will be used to influence program changes.

 

How will you measure your data? Choosing valid and reliable instruments

When choosing a measure, ask yourself:

  1. Are the measures relevant to the activity, process, or behavior being assessed?
  2. Are the measures important to citizens and public officials?
  3. What measures have other experts and evaluators in the field used?
  4. What do program staff, customers, and other stakeholders believe is important to measure?
  5. Are newly constructed measures needed, and are they credible?
  6. Do the measures selected adequately represent the potential pool of similar measures used in other locations and jurisdictions? (Newcomer, Hatry, & Wholey, 2015, p. 16)

 

Writing the evaluation report

Although report types and content vary, evaluation reports generally include the following elements (Spaulding, 2014):

  1. Cover page
  2. Executive summary
  3. Introduction
  4. Methods
  5. Body of report

Be sure to develop a plan for who will create the report, with whom it will be shared, when it will be completed and distributed, and how the results will be used.

 

Be open to changes as the process proceeds

Program evaluation is an iterative process that can change as it unfolds. The purpose identified at the beginning may shift through the design and data collection phases. Being flexible and open to adjustments is extremely helpful, especially for first-time program evaluators.

 

References and Resources

Chyung, A. Y. (2015). Foundational concepts for conducting program evaluations. Performance Improvement Quarterly, 27(4), 77-96.

Josselin, J., & Le Maux, B. (2017). Statistical tools for program evaluation: Methods and applications to economic policy, public health, and education. Springer International Publishing.

Kekahio, W., Cicchinelli, L., Lawton, B., & Brandon, P. R. (2014). Logic models: A tool for effective program planning, collaboration, and monitoring (REL 2014–025). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Pacific. Retrieved from http://ies.ed.gov/ncee/edlabs

Lawton, B., Brandon, P. R., Cicchinelli, L., & Kekahio, W. (2014). Logic models: A tool for designing and monitoring program evaluations (REL 2014–007). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Pacific. Retrieved from http://ies.ed.gov/ncee/edlabs

Newcomer, K. E., Hatry, H. P., & Wholey, J. S. (2015). Planning and designing useful evaluations. In K. E. Newcomer, H. P. Hatry, & J. S. Wholey (Eds.), Handbook of practical program evaluation (4th ed., pp. 7-35). John Wiley & Sons, Inc.

Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Sage.

Rea, L. M., & Parker, R. A. (2005). Designing and conducting survey research: A comprehensive guide. Jossey-Bass.

Rogers, P. J. (2005). Logic models. In S. Mathison (Ed.), Encyclopedia of evaluation (pp. 232-235). Sage.

Spaulding, D. T. (2014). Program evaluation in practice: Core concepts and examples for discussion and analysis (2nd ed.). John Wiley & Sons, Inc.

Newcomer, Hatry, & Wholey (2015) suggest consulting three sources when selecting an evaluation approach:

  1. Joint Committee on Standards for Educational Evaluation. www.jcsee.org
  2. American Evaluation Association. www.eval.org
  3. Essential Competencies for Program Evaluators Self-Assessment. www.cehd.umn

 

Helpful websites: