Bridging Evolutionary Optimization and Explainable Artificial Intelligence

IEEE WCCI 2026 Workshop. Hybrid track for CEC / IJCNN / FUZZ-IEEE.

Motivation and scope

Explainable Artificial Intelligence (XAI) has become one of the most pressing challenges in modern machine learning. As AI systems are increasingly deployed in scientific, industrial, and societal domains, understanding and trusting their decisions is essential. Yet at its core, machine learning is an optimization process: model selection, hyperparameter tuning, and architecture search all focus on maximizing predictive performance. Current approaches rarely treat interpretability as part of this process.

This workshop opens a discussion on how explainability can be framed as a measurable objective within learning optimization frameworks, and how experimental design can reveal the dynamics behind such trade-offs. Building on decades of work in multi-objective evolutionary search, we explore optimization-aware interpretability, where trust and transparency are embedded directly into model search. The goal is to establish a shared research agenda in which explainability emerges from the search itself, rather than being added afterward.
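To make the framing concrete, below is a minimal sketch in Python of explainability as an explicit search objective. It is illustrative only, not a method proposed by the workshop: the toy accuracy surrogate, the feature-count interpretability proxy, and all names are assumptions chosen for brevity. A simple evolutionary loop mutates feature masks and keeps the Pareto-non-dominated set over (accuracy, interpretability), so interpretability is optimized during the search rather than assessed after training.

    import random

    N_FEATURES = 10
    random.seed(0)

    def accuracy(mask):
        # Toy surrogate: the first 3 features are informative, the rest add noise.
        return 0.5 + 0.15 * sum(mask[:3]) - 0.02 * sum(mask[3:])

    def interpretability(mask):
        # Assumed proxy: models with fewer active features are easier to explain.
        return 1.0 - sum(mask) / N_FEATURES

    def dominates(a, b):
        # Pareto dominance: a is at least as good everywhere and better somewhere.
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    def mutate(mask):
        # Flip one randomly chosen feature on or off.
        child = list(mask)
        child[random.randrange(N_FEATURES)] ^= 1
        return child

    # Evolve feature masks; after each generation keep only the distinct
    # non-dominated masks over (accuracy, interpretability).
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(20)]
    for _ in range(100):
        pop += [mutate(random.choice(pop)) for _ in range(20)]
        scored = {tuple(m): (accuracy(m), interpretability(m)) for m in pop}
        pop = [list(m) for m, s in scored.items()
               if not any(dominates(t, s) for t in scored.values())]

    # The survivors form an accuracy/interpretability trade-off front.
    for m in sorted(pop, key=accuracy, reverse=True):
        print(m, "acc=%.2f" % accuracy(m), "interp=%.2f" % interpretability(m))

In practice the surrogate would be replaced by the validation accuracy of a real model and the proxy by a task-appropriate explanation-quality metric; the point is only that interpretability enters the selection step itself.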

Agenda

  1. Opening and framing (Dania Tamayo-Vera)
    Introduction and overview: If learning is an optimization process, why isn't explainability part of what we optimize?

  2. Keynotes
    • Dania Tamayo-Vera, University of Prince Edward Island: Optimizing for Understanding: Reframing Explainability as a Search Problem.
    • Stephen Chen, York University: Experimental Computational Intelligence: Metrics and Methodologies for Understanding Optimization Behavior.
    • James Montgomery, University of Tasmania: Evolutionary Optimization in Complex Systems: Lessons from Environmental and Agricultural Modeling.
  3. Interactive group design session
    Mixed-background groups will explore practical challenges such as defining measurable objectives for explanation quality, designing search strategies that balance simplicity and interpretability, and examining trade-offs among accuracy, complexity, and human understanding. Each group will outline a short optimization-for-explainability concept and share it with the audience.

  4. Panel discussion
    Focused discussion on experimental approaches for analyzing algorithmic behavior in evolutionary optimization. Central question: How can experimental analysis of search dynamics guide the design of transparent and trustworthy optimization processes?

  5. Closing reflection
    Synthesis of key ideas and next steps for embedding explainability within optimization frameworks across computational intelligence domains.

Presenters

Stephen Chen

Dr. Stephen Chen, York University, Canada. Specialist in Experimental Computational Intelligence, focusing on designing novel experiments and metrics to understand optimization methods and processes.

James Montgomery

Dr. James Montgomery, University of Tasmania, Australia. Researcher in optimization and heuristic search, focusing on the design of evolutionary algorithms and their applications in environmental and agricultural modeling.

Dania Tamayo-Vera

Dr. Dania Tamayo-Vera, University of Prince Edward Island, Canada. Researcher in optimization-aware explainable AI and model selection, focusing on integrating trust metrics into evolutionary search.