1A1. Building Organizational-level Evaluation and Performance Measurement Capacity: Reflections on a Toolkit Approach
Organizational evaluation and performance measurement strategies can help clarify, standardize, and strengthen how organizations assess their progress and performance. By adopting such strategies, organizations can establish a shared vision, common language, and cohesive work plan for improvement. To support this goal, a toolkit was developed for evaluators in 2023 and first piloted at the Canadian Evaluation Society Conference that same year. In 2025, the toolkit was adapted for performance measurement and results-based management (RBM) specialists and piloted at the Performance and Planning Exchange Symposium. The toolkit was designed to: i) help practitioners assess their organization’s readiness to implement an organizational evaluation or performance measurement strategy; and ii) guide them in developing an action plan tailored to their organization’s context, strategic priorities, and the practitioner’s specific role. Its broader aim was to help evaluators generate practical ideas for strengthening evaluation and performance measurement capacity within their organizations.
This presentation highlights findings from the most recent pilot assessing the toolkit’s usefulness and impact. It also reflects on the future of organizational-level capacity building in evaluation and performance measurement.
Karolina Kaminska, PhD Candidate, University of Waterloo
Karolina Kaminska is a PhD candidate in Public Health Sciences, Health Evaluation at the University of Waterloo. She obtained her Bachelor of Health Sciences and MSc in Health Systems at the University of Ottawa. She has worked at various national and international non-profit and governmental organizations, including the World Health Organization, the Centre for Addiction and Mental Health, the Canadian Foundation for Healthcare Improvement, the Canadian Centre on Substance Use and Addiction, and the Department of Fisheries and Oceans, specializing in qualitative and quantitative research, analysis, and evaluation design, approaches, and methods. Her current doctoral research focuses on improving identification of adolescents with mental health and substance use issues by leveraging system-level evaluation, analysis, and machine learning.
1A2. From Silos to Learning: Reimagining Evaluation Use in the Federal Public Service
Federal evaluation functions are designed to strengthen accountability, learning, and evidence-informed decision-making, yet evaluation findings continue to be underutilized by program managers and executives (Bourgeois & Whynot, 2018). This presentation draws on an analytical framework developed in the presenter’s recent academic work, which applies Gareth Morgan’s mechanistic and political metaphors, Edgar Schein’s cultural model, Michael Quinn Patton’s Utilization-Focused Evaluation, and John Mayne’s concept of evaluative culture to explain the persistent underuse of evaluation findings in the Canadian federal government.
By exploring how bureaucratic structure, cultural assumptions, and political dynamics shape evaluation behaviour, the presentation highlights why evaluation often becomes a compliance exercise rather than a tool for continuous improvement. The session will offer a compact, high-level synthesis of three key insights:
Structure: How mechanistic bureaucracies create silos and unintentionally misalign evaluator and manager roles.
Culture: How underlying assumptions—not just espoused values—drive resistance to evaluation and limit learning.
Politics: How power dynamics and competing interests can be reframed as opportunities for negotiation, co-determination, and increased evaluation use.
The presentation proposes a path forward rooted in courageous leadership, evaluative thinking, and intentional relationship-building between program and evaluation teams. It emphasizes how organizations can shift from a culture of compliance to a culture of learning—one where evaluation is embraced as a strategic asset rather than an oversight requirement.
Mahmoud Rahim, Graduate Student, University of Ottawa
Mahmoud Rahim is a graduate student in evaluation at the University of Ottawa and an evaluator in the Federal Public Service. His academic work focuses on evaluation use, organizational learning, and evidence-informed decision-making within government settings. Drawing on research in evaluation theory, organizational culture, and public administration, Mahmoud analyzes why evaluation findings are often underutilized and what organizational conditions promote meaningful uptake. He has contributed to evaluation projects across federal programs and currently sits on the Board of the Canadian Evaluation Society Educational Fund. His work emphasizes strengthening collaboration between evaluators and program teams and cultivating cultures of learning across the public service.
1A3. Collaborative Sense-making: What Data Parties Bring to Federal Government Evaluations
Evaluations can produce solid data, but engagement from stakeholders can be weak or limited. Recommendations are made, but stakeholders lack the context from which the findings arise. Data parties offer a solution. They are semi-structured sessions where evaluators and stakeholders review the emerging evidence and findings to collectively analyze the data.
This presentation will explain what data parties are, why they work, and how to run them effectively, with real-world examples. Instead of evaluators interpreting results in isolation, data parties bring program staff, managers, and subject-matter experts into the interpretation process. This reduces misinterpretation, surfaces context the evaluation team may have missed, and builds early buy-in for results and recommendations.
Well-run data parties can sharpen findings, identify gaps early, and increase the likelihood that recommendations are actually implemented. Attendees will leave with a simple, repeatable approach they can apply in their own evaluations.
Brett Matsushita, Evaluator, Agriculture and Agri-Food Canada
Brett Matsushita is an emerging evaluator at Agriculture and Agri-Food Canada.