Optimization-Aware Interpretability: Hands-On Methods for Explainable Automated Machine Learning Systems

Abstract

Explainable Artificial Intelligence (XAI) has become a central challenge in modern machine learning, as AI systems are increasingly deployed in domains where interpretability and trust are essential. Yet most learning pipelines, including those in automated machine learning (AutoML), optimize only for predictive performance, leaving explanation quality and stability as afterthoughts. Recent work in evolutionary computation and XAI has identified the need for optimization frameworks that explicitly account for interpretability and transparency.

This tutorial introduces participants to optimization-aware interpretability, a perspective that treats explanation quality as a measurable objective within model search. Using case studies from AutoML and interpretability methods such as SHapley Additive exPlanations (SHAP), we demonstrate how to quantify explanation consistency across retraining, define trust-aligned metrics, and incorporate them into evolutionary and metaheuristic optimization loops. The tutorial blends conceptual discussion with practical demonstrations, offering both theoretical grounding and applied insight into building more transparent and reproducible AI systems.
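As a taste of what "explanation consistency across retraining" can mean in practice, the sketch below measures how stable feature attributions remain when a model is refit on bootstrap resamples. It is a minimal illustration, not the tutorial's actual code: it uses the absolute coefficients of a least-squares fit as a stand-in for mean |SHAP| values, and average pairwise Spearman rank correlation as an assumed consistency score; the data, seeds, and function names are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 3 informative features out of 5
X = rng.normal(size=(200, 5))
y = X @ np.array([2.0, -1.0, 0.5, 0.0, 0.0]) + rng.normal(scale=0.1, size=200)

def attribution_vector(X, y, seed):
    """Refit on a bootstrap resample and return |coefficients| as a
    crude feature-attribution vector (stand-in for mean |SHAP| values)."""
    rs = np.random.default_rng(seed)
    idx = rs.integers(0, len(X), len(X))
    coef, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    return np.abs(coef)

def rank_consistency(a, b):
    """Spearman rank correlation between two attribution vectors:
    1.0 means the feature ranking is identical across retrains."""
    ra, rb = np.argsort(np.argsort(a)), np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

# Explanation consistency: average pairwise rank agreement across 5 retrains
attrs = [attribution_vector(X, y, s) for s in range(5)]
scores = [rank_consistency(attrs[i], attrs[j])
          for i in range(5) for j in range(i + 1, 5)]
consistency = float(np.mean(scores))
print(f"explanation consistency: {consistency:.3f}")
```

A scalar like `consistency` can then serve directly as an additional objective (alongside accuracy) in an evolutionary or metaheuristic search over model configurations, which is the optimization-aware perspective the tutorial develops.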

Learning objectives

Participants will learn to:

- Quantify explanation consistency across model retraining using interpretability methods such as SHAP
- Define trust-aligned metrics for explanation quality and stability
- Incorporate interpretability objectives into evolutionary and metaheuristic optimization loops
- Apply these techniques within AutoML pipelines

Expected outcomes

By the end of the tutorial, participants will have:

- A conceptual grounding in optimization-aware interpretability
- Hands-on exposure to measuring explanation stability in practice
- Applied insight into building more transparent and reproducible AI systems

Presenters

Dania Tamayo-Vera

Dr. Dania Tamayo-Vera, University of Prince Edward Island, Canada
Assistant Professor and researcher in optimization-aware explainable AI. Her work integrates trust metrics into evolutionary and AutoML systems, bridging optimization theory with model interpretability. She has over five years of industry experience in applied machine learning and control systems and has co-authored several papers in optimization and evolutionary computation.

Antonio Bolufe-Röhler

Dr. Antonio Bolufe-Röhler, University of Prince Edward Island, Canada
Associate Professor specializing in evolutionary computation, metaheuristics, and hybrid optimization. His research focuses on adaptive algorithms, search analysis, and multi-objective optimization. He has authored numerous IEEE publications and is an active member of the IEEE Computational Intelligence Society, contributing to the advancement of experimental evolutionary algorithms.

Contact

Dania Tamayo-Vera: dtamayovera.email@upei.ca
Antonio Bolufe-Röhler: aboluferohlerl@upei.ca