Optimization-Aware Interpretability: Hands-On Methods for Explainable Automated Machine Learning Systems
Abstract
Explainable Artificial Intelligence (XAI) has become a central concern in modern machine learning, as AI systems are increasingly deployed in domains where interpretability and trust are essential. Yet most learning pipelines, including those in automated machine learning (AutoML), optimize only for predictive performance, leaving explanation quality and stability as afterthoughts. Recent work in evolutionary computation and XAI has identified the need for optimization frameworks that explicitly account for interpretability and transparency.
This tutorial introduces participants to optimization-aware interpretability, a perspective that treats explanation quality as a measurable objective within model search. Using case studies from AutoML and interpretability methods such as SHapley Additive exPlanations (SHAP), we demonstrate how to quantify explanation consistency across retraining, define trust-aligned metrics, and incorporate them into evolutionary and metaheuristic optimization loops. The tutorial blends conceptual discussion with practical demonstrations, offering both theoretical grounding and applied insight into building more transparent and reproducible AI systems.
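As a concrete illustration of the retraining-based consistency measurement mentioned above, the sketch below trains the same model class under two random seeds and compares the resulting SHAP feature attributions. It is a minimal sketch under illustrative assumptions (scikit-learn's diabetes dataset, a random forest regressor, and Spearman rank correlation as the consistency score); it is not the tutorial's reference implementation.

```python
# Minimal sketch (illustrative dataset, model, and metric choices):
# measuring SHAP explanation consistency across retraining runs.
import numpy as np
import shap
from scipy.stats import spearmanr
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)

def mean_abs_shap(seed):
    """Train a model with the given seed and return mean |SHAP| per feature."""
    model = RandomForestRegressor(n_estimators=100, random_state=seed).fit(X, y)
    shap_values = shap.TreeExplainer(model).shap_values(X)
    return np.abs(shap_values).mean(axis=0)

# Global feature attributions from two independently retrained models.
imp_a, imp_b = mean_abs_shap(seed=0), mean_abs_shap(seed=1)

# One simple consistency score: rank correlation of feature importances.
# Values near 1 indicate explanations that survive retraining.
consistency, _ = spearmanr(imp_a, imp_b)
print(f"Explanation consistency (Spearman rho): {consistency:.3f}")
```

Other consistency scores (e.g., per-instance attribution distances) can be swapped in without changing the overall retrain-and-compare structure.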
Learning objectives
Participants will learn to:
- understand how optimization drives explainable AI and why current AutoML systems prioritize accuracy over interpretability
- identify core explainability methods and their reproducibility challenges
- quantify and visualize explanation consistency through retraining-based metrics
- incorporate interpretability and trust metrics as secondary objectives in optimization workflows (see the sketch after this list)
- design small-scale experiments to study how exploration-exploitation dynamics affect model transparency
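To ground the objective on interpretability as a secondary criterion, the following sketch scores each candidate configuration by a weighted combination of validation performance and explanation stability. The search space, the 0.7/0.3 weighting, and the stability metric are illustrative assumptions, not the tutorial's prescribed setup.

```python
# Minimal sketch (illustrative assumptions: toy search space, fixed 0.7/0.3
# weighting, Spearman-based stability score); not the tutorial's reference code.
import numpy as np
import shap
from scipy.stats import spearmanr
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

def fit(max_depth, seed):
    """Train one candidate configuration with a given random seed."""
    return RandomForestRegressor(max_depth=max_depth, random_state=seed).fit(X_tr, y_tr)

def explanation_stability(max_depth):
    """Rank correlation of mean |SHAP| importances across two retrainings."""
    imp = [np.abs(shap.TreeExplainer(fit(max_depth, seed)).shap_values(X_va)).mean(axis=0)
           for seed in (0, 1)]
    return spearmanr(imp[0], imp[1])[0]

best_cfg, best_score = None, -np.inf
for max_depth in (2, 4, 8, None):                        # toy hyperparameter search space
    performance = fit(max_depth, seed=0).score(X_va, y_va)  # validation R^2
    trust = explanation_stability(max_depth)                 # secondary objective
    combined = 0.7 * performance + 0.3 * trust               # simple scalarization
    if combined > best_score:
        best_cfg, best_score = max_depth, combined

print(f"Selected max_depth={best_cfg} (combined objective {best_score:.3f})")
```

The same scoring function can be dropped into an evolutionary or metaheuristic search in place of the exhaustive loop shown here; scalarization is used only to keep the example compact, and true multi-objective selection is also possible.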
Expected outcomes
By the end of the tutorial, participants will have:
- ready-to-use code templates for explanation stability and optimization-aware model selection
- a practical framework for embedding interpretability as an optimization objective
- guidelines for reproducible, transparent experimental design in explainable AI
Presenters
Dr. Dania Tamayo-Vera, University of Prince Edward Island, Canada
Assistant Professor and researcher in optimization-aware explainable AI. Her work integrates trust metrics into evolutionary and AutoML systems, bridging optimization theory with model interpretability. She has over five years of industry experience in applied machine learning and control systems and has co-authored several papers in optimization and evolutionary computation.

Dr. Antonio Bolufe-Röhler, University of Prince Edward Island, Canada
Associate Professor specializing in evolutionary computation, metaheuristics, and hybrid optimization. His research focuses on adaptive algorithms, search analysis, and multi-objective optimization. He has authored numerous IEEE publications and serves as an active member of the Computational Intelligence Society, contributing to the advancement of experimental evolutionary algorithms.
Contact
Dania Tamayo-Vera: dtamayovera.email@upei.ca
Antonio Bolufe-Röhler: aboluferohlerl@upei.ca