Antonis Kakas

antonis_kakas.png

EXPLAINABLE AI IN DIGITAL HEALTH

“Explainable AI working along with human professionals can deliver new levels of medical care”

Antonis C. Kakas is a Professor at the Department of Computer Science of the University of Cyprus. He studied Mathematics at Imperial College, London, and obtained his Ph.D. in Theoretical Physics from the same college in 1984. His interest in Computing and AI started in 1989 in the group of Professor Kowalski. Since then, his research has concentrated on Computational Logic in AI, with particular interest in Argumentation, Abductive and Inductive Reasoning, and their application to Machine Learning and Cognitive Systems. He has published over 100 papers, and his work is widely cited. At the University of Cyprus he has co-developed several research AI systems, including the GORGIAS argumentation system, which has been applied to a wide range of problems in multi-agent systems and cognitive agents, particularly in the area of digital health. With others, he has proposed Argumentation Logic, a logic that offers a new perspective for logic-based Explainable AI. Currently, he is working on the development of a new framework of Cognitive Programming that aims to offer an environment for developing Human-centric AI systems that can be used naturally by developers and human users at large. With his students and collaborators, he is developing two new systems, COGNICA for Cognitive Argumentation and ArgEML for Explainable Machine Learning, to support the framework of Cognitive Programming and its application to medical decision support. He has served as the National Contact Point for Cyprus in AI4EU, the flagship EU project on AI.

Professor (Emeritus) Antonis C. Kakas

Department of Computer Science

University of Cyprus

antonis-AT-ucy.ac.cy; (+357) 22 892706

Argumentation-based Explainable Machine Learning (ArgEML)

We are developing a framework for explainable machine learning that uses argumentation as the target language and as its notion of prediction for learning. This framework, called ArgEML, integrates sub-symbolic methods with argumentation, a logical method of symbolic AI, to provide explainable solutions to learning problems. Predictions are accompanied by detailed explanations that enable the human professional to make more informed decisions. Furthermore, the natural explainability of the argumentation-based approach helps us structure and partition the learning problem space into meaningful subspaces.

image1_kakas.png

The approach has been applied to several real-life medical problems, such as predicting the risk of stroke from asymptomatic carotid stenosis, detecting endometrial cancer, and predicting the risk of intracranial aneurysm rupture.
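To make the idea concrete, the following is a minimal sketch, not the ArgEML implementation itself: learned rules are treated as arguments for a label, conflicts between applicable arguments are resolved by a priority relation, and the winning argument serves directly as the explanation. All names, feature values, and priorities below are invented for illustration.

```python
# A minimal, hypothetical sketch of the ArgEML idea (not the actual system):
# learned rules become arguments for a label, a priority relation resolves
# conflicts, and the winning argument doubles as the explanation.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Argument:
    name: str                        # identifier, reused in the explanation
    label: str                       # the prediction this argument supports
    applies: Callable[[dict], bool]  # condition on the case's features
    priority: int = 0                # higher priority defeats conflicting arguments

def predict_with_explanation(case, arguments):
    """Return (predicted label, explanation) for a case given as a feature dict."""
    applicable = [a for a in arguments if a.applies(case)]
    if not applicable:
        return None, "No argument applies: the case lies outside the learned theory."
    winner = max(applicable, key=lambda a: a.priority)
    defeated = [a.name for a in applicable if a.label != winner.label]
    reason = f"{winner.name} supports '{winner.label}'"
    if defeated:
        reason += f", defeating {', '.join(defeated)} by priority"
    return winner.label, reason

# Invented arguments for a stroke-risk-style problem (feature names are assumptions).
theory = [
    Argument("A1 (high stenosis)", "high-risk", lambda c: c["stenosis_pct"] > 70, priority=2),
    Argument("A2 (default)", "low-risk", lambda c: True),
]

label, why = predict_with_explanation({"stenosis_pct": 82}, theory)
print(label, "-", why)  # high-risk - A1 (high stenosis) supports 'high-risk', defeating A2 (default) by priority
```

Note that the conditions of the arguments also induce the partitioning mentioned above: each region of the feature space where a particular argument wins is a meaningful subspace with its own explanation.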


Human-centric AI (HCAI) for Medical Support

Human-centric artificial intelligence (HCAI) aims to provide support systems that act as peer companions to an expert in a specific domain, simulating the expert's way of thinking and decision-making in solving real-life problems. We are developing a methodology for building such systems in the medical domain. The gynaecological artificial intelligence diagnostics (GAID) assistant is one such system.

image2_kakas.png

Built on AI argumentation technology, it is designed to incorporate, as far as possible, a complete representation of the medical knowledge in gynaecology and to become a real-life tool that practically enhances the quality of healthcare services and reduces stress for the clinician. The system provides explanations supporting a diagnosis against other possible diseases, and these explanations allow the system to be improved modularly through feedback on diagnostic discrepancies between the system and a specialist.

GAID has been tested on real-life cases from known past clinical visits, reaching reasonable levels of accuracy in comparison with a senior doctor.
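As a rough illustration of this diagnose-and-review cycle, here is a small sketch under invented disease and symptom names. The actual GAID system is built on argumentation (GORGIAS) rather than the naive symptom-overlap scoring used here, but the shape of the workflow is the same: explain a diagnosis against its strongest alternative, and log discrepancies with the specialist for modular revision.

```python
# Hypothetical sketch of GAID's workflow; everything below is invented for illustration.
def diagnose(symptoms, knowledge):
    """Pick the disease whose known symptoms best cover the reported ones,
    explaining the choice against the strongest alternative."""
    ranked = sorted(knowledge.items(), key=lambda kv: len(kv[1] & symptoms), reverse=True)
    best = ranked[0]
    explanation = f"{best[0]}: matches {sorted(best[1] & symptoms)}"
    if len(ranked) > 1:
        alt = ranked[1]
        explanation += f"; preferred over {alt[0]}, which matches only {sorted(alt[1] & symptoms)}"
    return best[0], explanation

def review(case, system_dx, specialist_dx, discrepancy_log):
    """Record disagreements with the specialist so the relevant part of the
    knowledge can be revised modularly."""
    if system_dx != specialist_dx:
        discrepancy_log.append({"case": case, "system": system_dx, "specialist": specialist_dx})

knowledge = {"disease_A": {"pelvic_pain", "bleeding"}, "disease_B": {"pelvic_pain", "fever"}}
log = []
dx, why = diagnose({"pelvic_pain", "bleeding"}, knowledge)
review({"pelvic_pain", "bleeding"}, dx, "disease_A", log)  # agreement: nothing logged
print(dx, "-", why)
```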

image3_kakas.png

The ophthalmological system is an Explainable AI triage system that assesses the likely severity of a patient's eye symptoms so that the patient can be given an appropriate priority for seeing a doctor. The system takes as input the initial symptoms selected by the patient or a nurse and, depending on how complete these are for making a good prediction of the possible disease, asks the patient for further relevant information.

image4_kakas.png

The output of the system is a set of suggested possible diseases, each accompanied by an explanation of why it is possible.
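The triage flow can be sketched as follows. The symptoms, diseases, and priorities are invented for illustration, and the real system uses argumentation rather than this simple set-based filtering to decide which diseases remain possible and what to ask next.

```python
# Hypothetical sketch of the triage loop: filter candidate diseases by reported
# symptoms, ask a discriminating question while several remain, then report
# each possibility with its priority and supporting symptoms.
DISEASES = {
    "acute_glaucoma": {"symptoms": {"eye_pain", "redness", "halos"}, "priority": "urgent"},
    "conjunctivitis": {"symptoms": {"redness", "discharge"}, "priority": "routine"},
}

def candidates(reported):
    """Diseases consistent with at least one reported symptom."""
    return {d: info for d, info in DISEASES.items() if reported & info["symptoms"]}

def next_question(reported):
    """While several diseases remain possible, ask about a symptom that
    discriminates between them; return None when no question is needed."""
    cands = candidates(reported)
    if len(cands) <= 1:
        return None
    symptom_sets = [info["symptoms"] for info in cands.values()]
    discriminating = set.union(*symptom_sets) - set.intersection(*symptom_sets) - reported
    return min(discriminating) if discriminating else None

def triage(reported):
    """Print each possible disease with its priority and supporting symptoms."""
    for disease, info in candidates(reported).items():
        print(f"{disease} ({info['priority']}): possible because of {sorted(reported & info['symptoms'])}")

print(next_question({"redness"}))  # 'discharge' - both diseases are still possible
triage({"redness", "eye_pain"})    # acute_glaucoma (urgent): ...; conjunctivitis (routine): ...
```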

image5_kakas.png


SELECTED GRANTS

  1. Integrated Explainable AI (IXAI) for Medical Decision Support, ARGEML. Call: University of Cyprus Internal Projects. Mar. 2021 – Mar. 2023. Funding 37,000 Euro
  2. Multimodal 3D Holographic tool and real-time Guidance System with point-of-care diagnostics for surgical planning and interventions on liver and pancreatic cancers (HoloSurge). HORIZON-HLTH-2023-TOOL-05 (Tools and technologies for a healthy society (Single stage - 2023)), Topic: HORIZON-HLTH-2023-TOOL-05-05, Type of Action: HORIZON-IA. Proposal number: 101137233, Jan. 2024 – Dec. 2027, Funding 352,500 Euro
  3. VELES Excellence Hub - Strengthening the South-East Europe Smart Health Regional Excellence and Boosting the Innovation Potential (VELES). HORIZON-WIDERA-2022-ACCESS-04 (Excellence Hubs), Topic: HORIZON-WIDERA-2022-ACCESS-04-01, Type of Action: HORIZON-CSA. Proposal number: 101087483, Jun. 2023 – May 2027, Funding 262,500 Euro
  4. Hospital Transformation through Artificial Intelligence (HospAItal), CODEVELOP-ICT-HEALTH/0322/0071, Research and Innovation Foundation, Cyprus, Oct. 2022 – Sep. 2024, Funding 250,000 Euro
  5. Identification of unstable carotid plaques associated with symptoms using ultrasonic image analysis and plaque motion analysis (AtheroRisk). Research and Innovation Foundation, RESTART 2016-2020 Call, EXCELLENCE/0421/0292, Mar. 2022 – Aug. 2024, Funding 250,000 Euro

SELECTED PUBLICATIONS

  1. Emmanuelle Dietz, A. C. Kakas, Loizos Michael, Argumentation: A calculus for Human-Centric AI. Frontiers in Artificial Intelligence 5, 2022.
  2. Nicoletta Prentzas, Athena Gavrielidou, Marios Neofytou, A. C. Kakas, Argumentation-based Explainable Machine Learning (ArgEML): a Real-life Use Case on Gynecological Cancer. ArgML@COMMA, pp 1-13, 2022.
  3. Nikolaos I. Spanoudakis, Georgios Gligoris, Adamos Koumi and A. C. Kakas, Explainable argumentation as a service. J. Web Semantics. 76: 100772, 2023.
  4. Nicoletta Prentzas, Constantinos S. Pattichis, and A. C. Kakas, Explainable Machine Learning via Argumentation. xAI (3): 371-398, 2023.
  5. Chara Skouteli, Nicoletta Prentzas, A. C. Kakas, and C. S. Pattichis, Explainable AI modeling in the prediction of cardiovascular disease risk. In: Digital Health and Informatics Innovations for Sustainable Health Care Systems, Studies in Health Technology and Informatics, Vol. 316, pp 978-982, 2024.