XTAI 2022

AISB Workshop on Explainability and Transparency in AI

Swansea University, Bay Campus, Computational Foundry (and online)
13 October 2022

AISB and the Department of Computer Science at Swansea University are holding a hybrid one-day workshop on explainability and transparency in AI. This workshop is free to attend for AISB members but requires registration.

Topics include:

  • Explainable AI
  • Argumentation Theory
  • AI Transparency
  • Responsible, Reliable, and Resilient Design of AI Systems

The workshop will have invited and contributed talks as well as discussion sessions aimed at exploring cutting-edge research and open problems in areas related to explainable AI and transparency of automated data-driven decision making.

Programme:

10:00: Opening
10:05: Hsuan Fu – Explainability and Predictability of Machine Learning Methods in Finance Applications
10:35: Raghav Kovvuri – Investigating Global Open-Ended Funds diversification among G11 countries through XAI
11:00: Break
11:30: Xiyui Fan – XAI with Probabilistic Structured Argumentation
12:00: Adam Wyner – Values in the Roots of Conflict in Argumentation
12:30: Lunch break

14:00: Arnold Beckmann – Hybrid AI approaches for Knowledge-Intense Manufacturing
14:30: Jay Morgan – Adaptive Neighbourhoods for the Discovery of Adversarial Examples

15:00: Coffee
15:30: Jamie Duell – Enhancing the Explainability of Electronic Health Record Predictions
16:00: Bertie Müller – PACE (Parametrised Automated Counterfactual Explanations) for Re3 (Reliable, Responsible & Resilient Design)
16:30: Panel Discussion

Organisers:

  • Monika Seisenberger (Swansea University)
  • Bertie Müller (AISB & Swansea University)

Email xtai@aiqt.uk for any other queries.

Abstracts:

Jamie Duell: Enhancing the Explainability of Electronic Health Record Predictions
Artificial Intelligence is ubiquitous in many high-impact domains such as healthcare, finance and law. Electronic Health Records (EHRs) are a common form of data representation in healthcare, where the popular tree-ensemble and deep-learning approaches lack interpretability. Given the variety of such approaches, model-agnostic methods attempt to convey feature explanations to the user by approximating the black-box model. Because each patient is unique, instance-based explanations, namely local explanations, have been introduced. In an attempt to replicate the black-box model, we introduce Polynomial Adaptive Local Explanations, which provide local explanations that adapt to each patient and thereby achieve patient specificity. To support this, and because missing data are prominent in EHRs, we introduce an imputation method together with associative properties that hold for explanations when data are imputed. In combination, these methods enhance the explainability of EHR predictions through both the quality of the data provided and the XAI methods themselves.
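
The sketch below is only a rough illustration of the general idea of a locally adaptive surrogate explanation; it is not the authors' Polynomial Adaptive Local Explanations method, and the black-box model, synthetic features and neighbourhood scheme are assumptions made for the example.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import Ridge
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(0)

    # Synthetic stand-in for EHR features (illustrative only).
    X = rng.normal(size=(500, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0).astype(int)

    black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    def local_polynomial_explanation(x, width=0.5, n_samples=1000, degree=2):
        """Fit a proximity-weighted polynomial surrogate to the black box around one patient."""
        # Perturb the instance to create a local neighbourhood.
        Z = x + rng.normal(scale=width, size=(n_samples, x.shape[0]))
        p = black_box.predict_proba(Z)[:, 1]                              # black-box outputs to mimic
        w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * width ** 2))      # proximity weights
        poly = PolynomialFeatures(degree=degree, include_bias=False)
        surrogate = Ridge(alpha=1.0).fit(poly.fit_transform(Z), p, sample_weight=w)
        # Coefficients of the local polynomial serve as the patient-specific explanation.
        return dict(zip(poly.get_feature_names_out(), surrogate.coef_))

    print(local_polynomial_explanation(X[0]))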

Xiyui Fan: XAI with Probabilistic Structured Argumentation
Argumentation-based XAI has been studied in the literature. In this talk, I will present some of my own work in this area. The main approach is a probabilistic structured argumentation framework built from probabilistic rules. I will discuss how such a formalism can support argumentative reasoning for interpretability while allowing efficient numerical calculation with parallelisation.
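
The formalism itself is not given in the abstract, so the following toy sketch only illustrates the flavour of chaining probabilistic rules into an argument and attaching a probability to it under a simplistic independence assumption; the rule set, facts and claim are invented for illustration and are not the framework presented in the talk.

    # Toy probabilistic rules: (head, body, probability). Invented for illustration.
    RULES = [
        ("high_risk", ("smoker", "hypertension"), 0.7),
        ("hypertension", ("high_salt_diet",), 0.6),
    ]
    FACTS = {"smoker": 0.9, "high_salt_diet": 0.8}

    def argue(claim):
        """Build a rule-based argument for `claim` and return (support tree, probability),
        multiplying rule and premise probabilities under an independence assumption."""
        if claim in FACTS:
            return claim, FACTS[claim]
        for head, body, p in RULES:
            if head == claim:
                sub = [argue(b) for b in body]
                prob = p
                for _, q in sub:
                    prob *= q
                return (claim, sub), prob
        return None, 0.0

    tree, prob = argue("high_risk")
    print(tree)
    print(f"P(argument for high_risk) = {prob:.3f}")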

Raghav Kovvuri: Investigating Global Open-Ended Funds diversification among G11 countries through XAI
Open-ended funds are run by asset managers who diversify the pooled funds across investments; in doing so, the specific risks associated with pooled funds can be mitigated. In this research, we use an eXplainable Artificial Intelligence (XAI) feature-attribution algorithm to quantify the diversification strategy through its effect on the corresponding Net Asset Value (NAV). To do this, we collected data from the Morningstar Direct database covering 313,737 unique funds and their fund allocations from December 2000 to November 2021, i.e. 21 years at monthly frequency, across the G11 countries. Preliminary results for funds originating from the USA, UK and Canada across the G11 countries show that the most important features identified by SHapley Additive exPlanations (SHAP) are “Stock Index” and “Funds Performance” with respect to previous quarters, which have a strong influence on the dynamics of the NAV.
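
For a flavour of the kind of SHAP-based feature attribution described, here is a minimal sketch on synthetic data; the feature names, model and data below are placeholders and do not come from the Morningstar data used in the study.

    import numpy as np
    import pandas as pd
    import shap                      # SHapley Additive exPlanations
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)

    # Placeholder features loosely mirroring those named in the abstract.
    X = pd.DataFrame({
        "stock_index_prev_q": rng.normal(size=1000),
        "fund_performance_prev_q": rng.normal(size=1000),
        "bond_allocation": rng.uniform(size=1000),
        "cash_allocation": rng.uniform(size=1000),
    })
    nav = 2.0 * X["stock_index_prev_q"] + X["fund_performance_prev_q"] + rng.normal(scale=0.1, size=1000)

    model = GradientBoostingRegressor().fit(X, nav)

    explainer = shap.TreeExplainer(model)       # SHAP values for tree ensembles
    shap_values = explainer.shap_values(X)

    # Rank features by mean absolute attribution to the predicted NAV.
    importance = np.abs(shap_values).mean(axis=0)
    for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
        print(f"{name}: {score:.3f}")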

Jay Morgan: Adaptive Neighbourhoods for the Discovery of Adversarial Examples
Machine Learning, and in particular Deep Learning, has recently provided state-of-the-art results for many tasks such as object recognition, text-to-speech processing, and credit-card fraud detection. In many cases, Deep Learning has even surpassed human performance on these very same tasks. Despite this advance in performance, however, the existence of so-called adversarial examples is well known within the community. Adversarial examples are the metaphorical ‘blind spot’ of Deep Learning models: very small (often human-imperceptible) changes to a model’s input can result in catastrophic misclassifications. They therefore pose a great safety risk, especially in safety-critical systems such as fully autonomous vehicles.

To defend against and attempt to eradicate adversarial examples in Deep Learning models, principal works have sought to search for them within fixed-size regions around training points and to use the adversarial examples found as a criterion for learning. These works have demonstrated how the robustness of Deep Learning models against adversarial examples improves under such training regimes.

Our work aims to complement and improve on these existing approaches by adapting the size of the searchable region around each training point to the complexity of the problem and the data sampling density. As a result, each training point has an adapted region around it within which adversarial examples can be searched for and found.

We demonstrate how, through the development of these uniquely adaptive searchable regions, existing methods can further improve the robustness of Deep Learning models, and how they become applicable to tasks beyond images because the regions provide an upper bound for discovering adversarial examples.

In this presentation, we will explore how adversarial examples can be found using existing approaches, and how our method goes further by generating a unique, adapted region size for every training point in a dataset.
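
As a minimal sketch of the adaptive-neighbourhood idea described above (not the authors' implementation): the k-nearest-neighbour radius heuristic and the one-step FGSM-style search below are assumptions made for illustration, written in PyTorch.

    import torch
    import torch.nn as nn

    def adaptive_radii(X, k=5, scale=0.5):
        """Assumed heuristic: set each training point's search radius to a fraction of
        the distance to its k-th nearest neighbour (denser regions get smaller radii)."""
        d = torch.cdist(X, X)                      # pairwise distances
        kth = d.sort(dim=1).values[:, k]           # k-th neighbour distance (index 0 is the point itself)
        return scale * kth

    def fgsm_in_ball(model, x, y, eps, loss_fn=nn.functional.cross_entropy):
        """One-step FGSM search for adversarial examples inside per-point L-inf balls of radius eps."""
        x_adv = x.clone().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + eps.view(-1, 1) * x_adv.grad.sign()
        return x_adv.detach()

    # Tiny demo with a linear classifier on random 2-D data (illustrative only).
    torch.manual_seed(0)
    X = torch.randn(200, 2)
    y = (X[:, 0] > 0).long()
    model = nn.Linear(2, 2)

    eps = adaptive_radii(X)                  # one radius per training point
    X_adv = fgsm_in_ball(model, X, y, eps)   # candidates for adversarial training
    print(eps[:5], (X_adv - X).abs().max().item())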

Adam Wyner: Values in the Roots of Conflict in Argumentation
In computational argumentation, values are presented as the means to adjudicate conflict between arguments: values are posited to hold of an argument as a whole rather than of any of its constituent parts, and preference rankings between values help to determine a winning argument. In this paper, we propose a novel, formal approach to a knowledge base for argumentation, leading to Value-based Instantiated Argumentation (VIA). It accounts for motivated reasoning, the observation that an agent constructs (instantiated) arguments selectively, relative to the agent’s interests, out of the set of all available propositions. Here, interests are an agent’s values, so the arguments reflect an agent’s values. Given this, conflicts between arguments are fundamentally grounded in conflicts between the values associated with the constituent propositions rather than with the arguments as a whole; this contrasts with much current work, where values are used to adjudicate amongst the arguments themselves. In addition to showing how an agent’s knowledge base is constructed relative to the agent’s values, we state and prove several novel claims.
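
As a toy illustration of conflicts being rooted in the values attached to propositions rather than in whole arguments: the propositions, values, contraries and preference ordering below are invented, and the sketch is not the VIA formalism itself.

    # Each proposition carries the value it promotes; an agent only selects
    # propositions whose values it cares about (a crude stand-in for motivated reasoning).
    PROPOSITIONS = {
        "raise_taxes": "equality",
        "cut_taxes": "liberty",
        "fund_hospitals": "welfare",
    }
    CONTRARIES = {("raise_taxes", "cut_taxes")}

    def build_argument(agent_values):
        """Select the propositions compatible with the agent's values."""
        return {p for p, v in PROPOSITIONS.items() if v in agent_values}

    def value_conflicts(arg_a, arg_b, preference):
        """Return proposition-level conflicts, resolved by a preference order over their values."""
        out = []
        for p, q in CONTRARIES:
            if p in arg_a and q in arg_b:
                winner = p if preference.index(PROPOSITIONS[p]) < preference.index(PROPOSITIONS[q]) else q
                out.append((p, q, winner))
        return out

    a = build_argument({"equality", "welfare"})
    b = build_argument({"liberty"})
    print(value_conflicts(a, b, preference=["equality", "liberty", "welfare"]))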