3A1 Systems Thinking as Fertile Ground for Evaluation
Systems thinking is an approach and a suite of tools for thinking and acting in complexity. It invites us to examine issues as systems and to focus on their interdependencies. Systems thinking rejects fragmentation, focuses on relationships, embraces contradictions, and invites deeper, more transformative questions (and action).
This 15-minute presentation will propose systems thinking as a framework to enhance the scoping and planning of evaluations. In 2023-2024, I worked as an analyst at la Maison de l’innovation sociale in Montreal, a non-profit organization that puts systems thinking at the core of its work to improve communities. I will share thoughts and references that fellow evaluators can draw on as part of an underpinning strategy for working in complexity and aligning evaluations with the greater good.
The presentation will cover:
- Briefly: what systems thinking is and some of its key concepts
- Ideas for applying systems thinking to evaluation scoping, question development, and background research that plants the seeds of change
- Systems thinking skills that evaluators can develop and other references to consult
Marie-Philippe Lemoine, Evaluator, Agriculture and Agri-Food Canada
Marie-Philippe is a credentialed evaluator at Agriculture and Agri-Food Canada. She lives in Montreal and previously worked at La Maison de l’innovation sociale (la MIS) and with the private consulting firm Goss Gilroy Inc. She is partial to cookies, speculative fiction and unaided stargazing.
3A2 Applying Indigenous Approaches in Federal Evaluation: Challenges and Lessons Learned
As part of the federal commitments to advance reconciliation with Indigenous Peoples, the Evaluation Branch at Indigenous Services Canada seeks new ways to implement evaluation methods and approaches centered on Indigenous worldviews and knowledge systems. Using the ongoing evaluation of the Economic Development Capacity and Readiness Program as a site of inquiry, this presentation explores a hybrid evaluation methodology that both adheres to the Treasury Board of Canada’s Policy on Results and centers Indigenous ways of knowing and being.
This presentation will:
- Provide a foundational context for and understanding of the methodology, including how it blends Treasury Board of Canada Secretariat requirements with Indigenous approaches. This section will include examples such as: an overview of the foundational framework developed by the Indigenomics Institute; the role of an Opening Ceremony to “Bear Witness” to the launch of the evaluation; approaches used to ensure that Indigenous Peoples and communities had a guiding role in all phases of the evaluation; and an emphasis on community-defined measures of success.
- Discuss challenges and lessons learned in implementing this framework within the federal system, such as balancing the timelines and processes of the federal government with the timelines, processes, and wishes of communities; and honoring our mistakes and missteps.
- Present a reflective, “so what?” discussion, which will analyze how this pilot exemplifies ISC Evaluation’s objective to transform its practice, the perceived benefit that this innovative approach provided, and how the learnings from the pilot could be adapted for future work.
Alexis Gilmer, Senior Evaluation Officer, Indigenous Services Canada
Alexis Gilmer (she/her/elle) is a Senior Evaluation Officer at Indigenous Services Canada, where she has worked since September 2020. Prior to entering the public service, Alexis received her Master’s in Psychology from Wilfrid Laurier University, where she was introduced to the world of program evaluation. She is honored to work and live on the traditional territory of the Anishinaabek Peoples now known as the Chippewa Tri-Council with her partner and elderly rescue dog.
3A3 Leveraging Outcome Harvesting and AI to Build More Effective Evaluation Plans
This presentation aims to demonstrate how an innovative evaluation approach like outcome harvesting can be paired with AI to create more effective and sound evaluation plans.
Outcome harvesting is a flexible evaluation method that captures and verifies actual outcomes, eliminating the need for a predefined logic model. By using AI, evaluators can quickly analyze large datasets, identify patterns, and automate time-consuming tasks, though data accuracy remains a key concern. Outcome harvesting provides a critical safeguard with its built-in data verification mechanisms, helping to ensure that potential errors introduced by AI are caught and corrected. This makes it a sound approach for leveraging AI in evaluation, allowing evaluators to rely on AI for efficiency while maintaining confidence in the accuracy of their results. Through a case study, participants will gain an understanding of what outcome harvesting is, the steps in the process where AI can be effectively utilized, and how the results of the harvest can be used to create a more effective evaluation plan.
The session will provide real-world examples of applying outcome harvesting in program evaluations without a clear roadmap and demonstrate how AI can complement this by quickly identifying trends, managing large datasets, and synthesizing qualitative and quantitative data. By the end of the presentation, participants will have practical tools for integrating AI and outcome harvesting in a responsible, effective way, empowering them to create robust evaluation plans even when a program lacks a formal structure.
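To make the division of labour described above concrete, here is a minimal Python sketch (not drawn from the session itself) of one way the pairing could work: an AI step drafts candidate outcome statements from source documents, and outcome harvesting’s substantiation step keeps a human decision between every draft and the evaluation plan. All names here, including the draft_outcomes stub that stands in for a real model call, are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One candidate outcome, in classic outcome-harvesting terms."""
    description: str        # the observed change in a social actor
    contribution: str       # how the program plausibly contributed
    source: str             # document or informant the draft came from
    verified: bool = False  # True only after human substantiation

def draft_outcomes(text: str, source: str) -> list[Outcome]:
    """Hypothetical AI step: a language model proposes candidate outcome
    statements from a document. Stubbed here; outputs are drafts only."""
    return [
        Outcome(
            description="<model-drafted change statement>",
            contribution="<model-drafted contribution statement>",
            source=source,
        )
    ]

def substantiate(outcome: Outcome, confirmed: bool) -> Outcome:
    """Harvesting's safeguard: an informant with first-hand knowledge
    confirms or rejects each AI-drafted outcome."""
    outcome.verified = confirmed
    return outcome

# AI drafts at scale; humans verify; only verified outcomes feed the plan.
drafts = draft_outcomes("…annual report text…", source="2023 annual report")
harvest = [substantiate(o, confirmed=True) for o in drafts]
plan_inputs = [o for o in harvest if o.verified]
```

The design choice mirrors the abstract’s argument: the model buys speed on extraction, while verification remains a human gate before anything shapes the evaluation plan.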
Chinyere Amadi Dufour, Evaluation Project Manager, National Research Council of Canada
Chinyere Amadi Dufour is a seasoned, Ottawa-based monitoring and evaluation technical expert with experience in the private, non-profit, and public sectors. She has professional experience across North America, the Caribbean, Africa, the Middle East, and Asia. Chinyere advises senior executives on performance priorities, results measurement, and adaptive management, and supports emerging evaluators in improving their skills.
She has a particular interest in designing results-based management tools, developing evaluation plans, and conducting MERL system audits. Chinyere holds a Master’s in Public Administration from Carleton University and a Bachelor’s (Honors) in International Politics and Economics from Middlebury College.
3A4 Harnessing AI in Program Evaluation: Navigating Innovation and Ethical Considerations
Artificial Intelligence (AI) feels almost magical, offering instant responses to any question. The excitement around AI and its potential uses is contagious. When used correctly, AI is a powerful tool; for example, we used Copilot to improve this abstract’s clarity. However, before using AI in evaluations, we need to understand its limitations.
This presentation will explore AI’s limitations and discuss how AI aligns with, and challenges, the core values of the Canadian Evaluation Society and the Canadian Public Service. By exploring these aspects, we aim to provide a thorough understanding of AI’s role in improving program evaluation while upholding those core values.
This presentation will:
- Examine the Benefits and Limitations of AI: Provide a balanced view of how AI can enhance program evaluation while acknowledging its current limitations.
- Align AI with Core Evaluation Values: Discuss strategies to ensure AI applications uphold the values of the Canadian Evaluation Society and the Canadian Public Service.
- Introduce the FASTER Acronym: Present the FASTER framework (Fair, Accountable, Secure, Transparent, Educated, Relevant) as a guide for ethical AI use in evaluations; a sketch of the idea follows this list.
- Promote Ethical Considerations: Highlight the importance of ethical considerations in integrating AI into evaluation practices.
- Encourage Dialogue and Innovation: Foster a conversation on the future of AI in evaluation, encouraging innovative yet responsible use of technology.
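As a rough illustration only, and assuming the FASTER principles from the Government of Canada’s guidance on the use of generative AI (Fair, Accountable, Secure, Transparent, Educated, Relevant), the sketch below shows one way a team might operationalize the acronym as a pre-use checklist. The prompts are paraphrases, not the framework’s official wording.

```python
# Illustrative only: FASTER principles rendered as a pre-use checklist.
# The questions below are paraphrases, not official wording.
FASTER_CHECKLIST = {
    "Fair":        "Has the output been screened for bias against any group?",
    "Accountable": "Is a named person responsible for verifying this output?",
    "Secure":      "Does the input avoid protected or sensitive information?",
    "Transparent": "Will readers be told that AI was used, and how?",
    "Educated":    "Does the user understand the tool's strengths and limits?",
    "Relevant":    "Is AI actually the right tool for this task?",
}

def review(answers: dict[str, bool]) -> list[str]:
    """Return the principles still needing attention before AI is used."""
    return [p for p in FASTER_CHECKLIST if not answers.get(p, False)]

# Example: two principles unresolved, so AI use should pause for review.
outstanding = review({"Fair": True, "Accountable": True, "Secure": False,
                      "Transparent": True, "Educated": True, "Relevant": False})
print(outstanding)  # ['Secure', 'Relevant']
```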
This presentation is particularly relevant to this conference as it addresses the intersection of emerging technologies and ethical evaluation practices, a key theme of our discussions.
Megan Vincent, Junior Evaluation Analyst, Public Service Commission
Megan Vincent is a junior evaluation analyst at the Public Service Commission with over 4 years of HR experience. Her cognitive psychology background fuels her interest in unconscious biases and their impact on decision-making. Passionate about accessibility, diversity, equity and inclusion, she uses data-driven analyses to develop evidence-based solutions.
Lys Granier, Program Evaluator, Public Service Commission
Lys Granier is a program evaluator at the Public Service Commission, where she passionately supports evidence-based decision-making. She has a background in social sciences research, focusing on social exclusion, gentrification, and housing issues. Her work as a researcher informed policymakers and the public about inequalities and advocated for evidence-based solutions. She has a special interest in moral philosophy, which percolates through her work.