Evaluation Anxiety and Stakeholder Behaviors

A Scripted Freelance Writer Writing Sample

Evaluation Anxiety (EA) refers to a set of attitudinal and behavioral responses arising from concern over possible negative consequences of performance in an evaluation context (Donaldson, Gooler, & Scriven, 2002). EA can damage evaluations, sometimes leading to unproductive behaviors among stakeholders. Project staff and leaders can therefore benefit from becoming familiar with EA and with ways to address it: by understanding the factors behind organizational anxiety toward evaluation, leaders can better identify those factors and work with stakeholders to confront them.

I pose the following scenario to illustrate some of these considerations:

A program director invites a new evaluator with 12 years of experience to a staff meeting to discuss the next year of program implementation. Recipients of the program are expected to implement certain practices in clinical settings. Knowing that these are multi-site projects and that the group is reportedly accustomed to larger-scale work, the evaluator asks to see any measures used in the past to document program fidelity. The group's responses suggest that assessing program fidelity is time consuming and has not been a priority: no measures exist. After further discussion about the programs, the evaluator recommends developing systematic quarterly interim reports to inform the group about "outputs" (head counts and professional characteristics of those attending the programs) and "initial outcomes" (measures of knowledge and of intent to transfer knowledge into practice). One member asks why anyone would provide such regular reports when the goal is to influence practice change down the road: "Aren't we really only concerned with what the final report says?" Two members later openly question the evaluator's expertise to other members, suspecting that the new evaluator may simply want job security through what appears to be excessive data collection and analysis.

The stakeholders in this scenario may be exhibiting EA. One sign is accusing evaluators of having hidden agendas (Donaldson et al., 2002). Another is stalling or protesting the use of evaluation results, as when the interim reports for program improvement were downplayed. Sources of anxiety here may include minimal experience with program evaluation and fear of negative findings. If this group is experiencing EA because of previous negative experiences with evaluation, the evaluator can look for ways to address those issues through open communication.

The evaluator would be wise to strategize with these stakeholders. He or she could ensure balanced reporting, highlighting positive results and conclusions along with negative ones. Working with consortium leadership, the evaluator can also raise the involvement of these stakeholders: in facilitating member involvement, the evaluator can collaborate with them to develop guiding questions and invite them to lend their content expertise to program refinement and interpretation. The evaluator may also consider disseminating or discussing the professional standards for evaluation (JCSEE, 2011; http://www.jcsee.org/) with this group, to further lend credibility to the process.

References:

Donaldson, S. I., Gooler, L. E., & Scriven, M. (2002). Strategies for managing evaluation anxiety. American Journal of Evaluation, 23, 261-273.

Joint Committee on Standards for Educational Evaluation (JCSEE). Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards: A guide for evaluators and evaluation users (3rd ed.). Thousand Oaks, CA: Sage.


Julianne R

