Explainable and Responsible AI (XRAI)
"With great power comes great responsibility" is not just a quote from a fictional universe but applies equally well to our AI-pervaded reality. Successful applications of AI are countless, ranging from recommending upcoming films matching our taste, or suggesting who our next potential date might be, to guiding the decisions of firms in the hiring process or diagnosing diseases and suggesting the best course of actions to doctors.
Whether we are aware of it or not, such applications are shaping our lives, yet this powerful capability is not matched by the necessary oversight. Worse still, AI is not immune to bias, possibly inherited from the data used for training, and its decision process is often not transparent enough.
At the XRAI research group we are interested in promoting an ethical use of AI, one that necessarily presupposes keeping the human in the loop. Our research focuses in particular on algorithmic impact assessment and on explainability for the medical domain.
Algorithmic impact assessment
Every decision with the potential to significantly affect people's lives must be accountable. This is all the more true nowadays, when intelligent systems are used by public agencies to optimise their processes or to ensure fair access to their services.
It is thus crucial for organisations to be able to self-assess their own automated decision systems, in order to evaluate their potential impact in terms of fairness, justice, and bias.
Our research addresses the increasing concerns about the ethical implications and unintended consequences of AI systems used within public or private organisations. We focus on developing tools for documenting and tracing the decision process of an AI algorithm, in order to foster transparency across the entire chain.
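As a deliberately simplified illustration of the kind of check such a self-assessment might include, the Python sketch below computes a demographic-parity gap over a hypothetical log of automated decisions. The data, the group labels, and any tolerance threshold are illustrative assumptions only, not a prescribed methodology.

```python
# Illustrative sketch only: a minimal demographic-parity check of the kind an
# organisation might run when self-assessing an automated decision system.
# The decisions, group labels, and tolerance below are hypothetical.
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Return the largest difference in positive-decision rates across groups,
    together with the per-group rates.

    decisions: iterable of 0/1 outcomes produced by the automated system
    groups:    iterable of protected-attribute values (e.g. age band, gender)
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        positives[g] += d
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit log: 1 = application approved, 0 = rejected
decisions = [1, 1, 1, 1, 0, 1, 0, 0, 0, 1]
groups    = ["A", "A", "A", "B", "B", "B", "B", "A", "B", "A"]

gap, rates = demographic_parity_gap(decisions, groups)
print(f"Per-group approval rates: {rates}")
print(f"Demographic parity gap: {gap:.2f}")  # flag for review if above a chosen tolerance
```

In practice a check like this would be only one entry in a broader audit trail documenting data provenance, model versions, and the rationale behind each automated decision.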
Interpretable ML for the medical domain
The need for explainability in AI, understood as the possibility for a human user to make sense of the underlying decision process of an intelligent agent, dates back to the early expert systems of the 1970s. Since then, methods for automated learning have been applied to the most diverse domains, often achieving impressive performance, especially since the advent of deep learning.
When high-stakes decisions are to be taken based on the outcome of an algorithm, relying on common accuracy metrics is no longer sufficient. This is the case in the medical domain: predictions that potentially affect human lives require a high level of accountability and transparency. Unfortunately, many machine learning algorithms are still poorly understood by non-specialists, mostly because of their inherent black-box nature.
Our research investigates methods for providing human-understandable and reliable explanations of machine learning models, so as to effectively foster an expert-in-the-loop approach. In particular, we focus on the medical domain, where complex ML models have so far found limited practical use precisely because they are so difficult to interpret.
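As a hedged sketch of what a post-hoc explanation can look like, the example below trains an interpretable surrogate (a shallow decision tree) to mimic a black-box classifier on synthetic tabular data. The use of scikit-learn, the random-forest stand-in, and the synthetic features are all illustrative assumptions, not the group's actual methods or clinical data.

```python
# Illustrative sketch only: approximating a black-box model with an
# interpretable global surrogate. Dataset and models are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for tabular clinical data (e.g. lab measurements).
X, y = make_classification(n_samples=500, n_features=6, n_informative=4, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

# "Black-box" model whose predictions we want to make sense of.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Global surrogate: a small tree trained to mimic the black-box predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Human-readable rules approximating the black-box decision boundary,
# plus a fidelity score saying how closely the surrogate tracks it.
print(export_text(surrogate, feature_names=feature_names))
print("Fidelity:", surrogate.score(X, black_box.predict(X)))
```

The extracted rules and the fidelity score give a domain expert something concrete to inspect and challenge, which is the essence of the expert-in-the-loop approach described above.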
Please get in touch if you want to learn more about our ongoing research, our current projects or open positions in our lab.
Contact
Dr Baidaa Al-Bander
Lecturer
- Colin Reeves Building, CR004
- b.al-bander@keele.ac.uk
Dr Marco Ortolani
Senior Lecturer
Computer Science
- Colin Reeves, CR102
- +44 1782 7 33264
- m.ortolani@keele.ac.uk