Book, English, 370 pages, format (W × H): 152 mm × 229 mm
Using GenAI in Evaluation Practice
Series: Comparative Policy Evaluation
ISBN: 978-1-041-35748-3
Publisher: Taylor & Francis Ltd
From Algorithms to Evidence: Using GenAI in Evaluation Practice offers a timely, practice-grounded guide for evaluators and development professionals navigating the fast-moving world of generative AI. Building on the foundations laid in Artificial Intelligence and Evaluation (2025), this volume moves decisively from theory to application. It documents how evaluators across the globe are already using GenAI in real projects, showing not only what worked, but also what failed, why, and under what conditions. The result is a clear, practitioner-centered resource that cuts through hype and provides grounded, real-world insight.
Structured around the seven phases of the evaluation lifecycle (design, structuring and inception, data collection, data analysis, reporting, judgment, and utilization), the book offers concrete examples of how GenAI is being integrated into everyday evaluation tasks. Contributors explain their rationale for using AI tools, the steps they took, and the results they achieved. Cross-cutting chapters synthesize lessons on methodological adaptation, evolving evaluator competencies, and the ethical and professional standards needed to use GenAI responsibly. Throughout, the volume emphasizes "hybrid intelligence," showing how human expertise and AI-enabled methods can work together to strengthen evaluative reasoning.
Clear, accessible, and grounded in real practice, From Algorithms to Evidence fills a critical gap in the literature. It provides evaluators, policymakers, and organizational leaders with practical guidance for building more adaptive, data-informed, and future-ready evaluation systems, while keeping equity, transparency, and human judgment at the center of their work.
Target audience
Postgraduate and Undergraduate Advanced
Authors/Editors
Subject areas
Further information & material
Foreword.
1. The Transformation of Evaluation Practice: From Algorithms to Evidence
2. Threading the Needle: Making Evaluation Affordable and Participatory for Grassroots Nonprofits
3. Scoping Evaluation with GenAI: A Case Study of the Partnership Principle in Poland
4. From Supporting Evaluators to Evaluating AI: A Human-Centered Approach to Developing EvalAssist, an AI-for-Good Augmentation Tool for Evaluation Design
5. Designing with Generative AI: Using GPT-4 to Build a Monitoring, Evaluation and Learning Framework
6. From Ritual to Reflex: Reimagining Results Measurement with Generative AI
7. Playing to Plan: Using a GenAI-Powered Chatbot Game to Support Logic Model Development in a City Government Context
8. Harnessing Generative AI for Data Structuring and Geospatial Analysis in Evaluation
9. Balancing Innovation and Rigor at the World Bank Independent Evaluation Group: Thoughtful Integration of Artificial Intelligence for Evaluation
10. Integrating Generative AI into Thematic Analysis: A Structured Approach for Large-Scale Document Review in Evaluation
11. Bridging Complexity with Intelligence: The Role of Generative AI in a Mixed-Methods Evaluation of SDG1 in Sub-Saharan Africa
12. Using AI for Interviewing in a Challenging Context
13. How to Train a Work Horse: Using Generative AI for Labelling Data
14. Sifting for Gold: Mining Relevant Content in a Large Document Collection
15. Harnessing Generative AI for Geospatial Analysis in Environmental Evaluations
16. Automating Partnership Principle Assessment: Using Generative AI to Analyze Stakeholder Participation in European Operational Program Governance
17. Elevating Local Voices: Qualitative Analysis Reimagined
18. Generative AI for Protocol-Driven Structured Analysis in Evaluations
19. Experimenting with AI Tools in CGIAR Science Group Evaluations: A Candid Assessment of Adoption by Learning
20. AIDA - Assessment for Development Analytics
21. ChatGPT x GBA+ - Using GenAI to Apply Evaluative Criteria: Assessing Reports Against Canada's GBA+ Framework
22. Human in the Loop: AI-Supported Evaluation in Hungary
23. Reviewer 0: Using Generative AI as the Evaluator's First Line of Feedback
24. Beyond the Radar: Addressing Undeclared AI Use in Evaluation Practice
25. Enhancing Evaluation Dissemination & Reporting with Generative AI
26. Leveraging Self-Hosted AI Solutions for Data-Secured Research and Evaluation
27. Leveraging Generative AI to Transform Evaluation Reports into Engaging Multi-Media Products
28. Podcasting Findings: Using Generative Artificial Intelligence to Tailor Dissemination to Diverse Stakeholders
29. Bridging Evaluation and Official Statistics Through GenAI: The EQUA Platform
30. Where Methodological Automation Ends and Judgement Begins: Evaluation in the Generative Artificial Intelligence Era
31. Artificial Intelligence in Evaluation Practice: Uses, Good Practice, and Emerging Competencies
32. Artificial Intelligence and the Ethics and Practice of Evaluation
33. FRAME: A Practical Framework for Responsible AI in Monitoring and Evaluation
34. GenAI and the Evolution of Evidence in Evaluation