Explainable AI via Computational Argumentation: a seminar by Francesca Toni

Publication date: 22.06.2021

Francesca Toni, Professor in Computational Logic and Royal Academy of Engineering/JP Morgan Research Chair on Argumentation-based Interactive Explainable AI at the Department of Computing, Imperial College London, UK, will visit the Sant'Anna School and hold a seminar titled "Explainable AI via Computational Argumentation". Please join us for this event on Monday, June 28th, 3:00pm-4:30pm, in the Aula Magna, Piazza Martiri della Libertà 33, 56100 Pisa. The seminar will also be available online via WebEx and YouTube. The abstract of the talk and Francesca's short bio follow.

Abstract: Explainable AI (XAI) has been investigated for decades and, together with AI itself, has witnessed unprecedented growth in recent years, fuelled by data availability and computational power. Indeed, it is widely acknowledged that AI cannot fully benefit society without addressing its widespread inability to explain its outputs, which causes human mistrust and raises doubts about regulatory compliance. Among various approaches to XAI, argumentative models have been advocated in both the AI and social science literature, as their dialectical nature appears to match some basic desirable features of the explanation activity.

Computational argumentation is a well-established paradigm in AI, at the intersection of knowledge representation and reasoning, computational linguistics and multi-agent systems. It is based on defining argumentation frameworks comprising sets of arguments and dialectical relations between them (e.g., attack and, additionally or alternatively, support), as well as so-called semantics (e.g., definitions of dialectically acceptable sets of arguments or of the dialectical strength of arguments), with accompanying computational machinery.

In this talk, Francesca Toni will show how computational argumentation, combined with a variety of mechanisms for mining argumentation frameworks, can be used to support various forms of XAI.
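For readers unfamiliar with the formalism described in the abstract, the sketch below illustrates one of its simplest instances: a Dung-style abstract argumentation framework with attack relations only, evaluated under grounded semantics (the least fixed point of the "defended arguments" operator). This is a minimal sketch under those assumptions; the function names and the example framework are invented for illustration and are not material from the talk.

```python
# A minimal, illustrative sketch of a Dung-style abstract argumentation
# framework (attack relations only) under grounded semantics.
# All names and the example framework are invented for illustration.

def grounded_extension(arguments, attacks):
    """Return the grounded extension of (arguments, attacks): the least
    fixed point of F(S) = {a | every attacker of a is attacked by S}."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended_by(s):
        # An argument is defended by s if each of its attackers is
        # counter-attacked by some member of s.
        return {a for a in arguments
                if all(any((d, b) in attacks for d in s)
                       for b in attackers[a])}

    extension = set()
    while True:
        successor = defended_by(extension)
        if successor == extension:
            return extension
        extension = successor

# Example: a attacks b, b attacks c.  The grounded extension is {a, c}:
# a is unattacked, and a defends c by attacking c's only attacker, b.
if __name__ == "__main__":
    args = {"a", "b", "c"}
    atts = {("a", "b"), ("b", "c")}
    print(sorted(grounded_extension(args, atts)))  # -> ['a', 'c']
```

Grounded semantics is only one of the many semantics the abstract alludes to; support relations and dialectical strength correspond to richer frameworks (e.g., bipolar or gradual argumentation) built on the same basic idea of arguments related by dialectical links.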

Short bio: Francesca Toni is Professor in Computational Logic and Royal Academy of Engineering/JP Morgan Research Chair on Argumentation-based Interactive Explainable AI at the Department of Computing, Imperial College London, UK, and the founder and leader of the CLArg (Computational Logic and Argumentation) research group. Her research interests lie within the broad area of Knowledge Representation and Reasoning in AI and Explainable AI, and in particular include Argumentation, Argument Mining, Logic-Based Multi-Agent Systems, Non-monotonic/Default/Defeasible Reasoning, and Machine Learning. She graduated, summa cum laude, in Computing at the University of Pisa, Italy, and received her PhD in Computing from Imperial College London. She has coordinated two EU projects, received funding from EPSRC and the EU, and was awarded a Senior Research Fellowship from The Royal Academy of Engineering and the Leverhulme Trust. She is currently Technical Director of the ROAD2H EPSRC-funded project (www.road2h.org/) and co-Director of the Centres for Doctoral Training in Safe and Trusted AI and in AI for Healthcare. She has been a EurAI fellow since 2019, and has recently been awarded an ERC Advanced grant on Argumentation-based Deep Interactive eXplanations (ADIX). She has published over 200 papers, co-chaired ICLP 2015 (the 31st International Conference on Logic Programming) and KR 2018 (the 16th Conference on Principles of Knowledge Representation and Reasoning), is corner editor on Argumentation for the Journal of Logic and Computation, serves on the editorial boards of the Argument and Computation journal and the AI journal, and sits on the Board of Advisors for KR Inc. and Theory and Practice of Logic Programming.