Asking a computer to combine facts and values with evidence and standards to make a judgment or determine value... it's just not there yet.
Dr Bianca Montrosse-Moorhead
Professor of Evaluation, University of Connecticut
About This Episode
What does it mean to be an evaluator, and how do we think about our professional identity in a rapidly changing field? In this episode, Dr Bianca Montrosse-Moorhead helps us look beneath the surface of evaluation practice. We explore the classic fox and hedgehog metaphor and what it reveals about how evaluators operate, why our tendencies matter, and how artificial intelligence is reshaping the judgments we make and the competencies we need.
Key Takeaways
1. The fox and hedgehog metaphor offers a lens for evaluator identity
Hedgehogs burrow deep into one area while foxes move across contexts and bring many things together. Evaluation thrives when we have both orientations, and recognising your tendency can help you understand friction or fit in your work.

2. AI will not make evaluators redundant any time soon
Humans remain essential for interpretation, evaluative reasoning, and making judgments that combine facts, values, evidence and standards. The technology is not yet capable of the interpretive and values-based work that sits at the heart of evaluation.

3. New competencies are required for working with AI
Evaluators need to develop skills in prompt engineering, understand when AI is fit for purpose, and navigate new ethical considerations around consent, transparency, and participant data.

4. Simple language gets us to the heart of the work
Evaluation loves to name and coin new things, but the definition should be fit for context. Sometimes "helping people understand the value of what they're doing" is more powerful than academic terminology.

5. AI introduces ethical considerations beyond the obvious
These include environmental impact from energy and water use, bias amplification from training data, and the question of whether participants should consent to AI being used on their data, even when commissioners approve it.
Resources Mentioned
- Evaluation Foundations Revisited: Cultivating a Life of the Mindful Practitioner by Thomas A Schwandt
- Evaluation Essentials: From A to Z by Marvin C Alkin, Anne T Vo and Christina A Christie
- Core Concepts in Evaluation: Classic Writings and Contemporary Commentary edited by Lori Wingate, Ayesha Boyce, Lyssa Wilson Becho and Kelly Robertson
- Evaluation Criteria for Artificial Intelligence by Bianca Montrosse-Moorhead
- Using Generative AI in Evaluation Practice edited by Carrie Bruce, Valentine Gandhi and Stephan Bony (forthcoming, open access)
About Bianca
Dr Bianca Montrosse-Moorhead
Professor of Research Methods, Measurement, and Evaluation, University of Connecticut
Bianca trains the next generation of evaluators through doctoral, master's and graduate certificate programmes at the University of Connecticut. She maintains an active evaluation practice alongside her academic work and has contributed significantly to thinking around evaluation competencies, methodology, and the intersection of AI with evaluation practice. She completed her MA and PhD at Claremont Graduate University, where she learned from and worked with Tina Christie, Stuart Donaldson, Michael Scriven and Hallie Preskill.
Enjoying It Depends?
Subscribe so you never miss an episode.