As more and more tasks and decisions are delegated to AI-enabled computers, mobile devices, and autonomous systems, it is crucial to understand the impacts this may have on people and to ensure that AI treats people ethically. Among other topics, we are working on:
1) Value-based and explainable AI, where we are developing AI models that are able to reason about human values, so that they act according to them. We also work on making AI models more transparent and explainable, so that users can better understand what they do and why. We have already shown that, in some specific recommendation domains, making AI value-aligned and explainable leads to more acceptable and satisfying recommendations. This also allows AI models in general to be scrutinised more effectively.
2) AI Discrimination, where users may be treated unfairly, or simply differently, based on their personal characteristics (e.g. gender, ethnicity, religion). Here, we work both on studying where AI biases may lead to discrimination and on methods to make AI fairer.
Our research in this domain often involves cross-disciplinary collaborations, including colleagues from the social sciences, digital humanities, law, ethics and policy/governance.
Related Projects
Discovering and Attesting Digital Discrimination (EPSRC) - DADD
National Research Centre on Privacy, Harm Reduction and Adversarial Influence Online (UKRI) - REPHRAIN
Selected Publications
IJCAI
The Role of Perception, Acceptance, and Cognition in the Usefulness of Robot Explanations
Hana Kopecka, Jose Such, and Michael Luck
In Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI), 2024
@inproceedings{kopecka2024role,author={Kopecka, Hana and Such, Jose and Luck, Michael},title={The Role of Perception, Acceptance, and Cognition in the Usefulness of Robot Explanations},booktitle={Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence {(IJCAI)}},pages={7868--7876},publisher={ijcai.org},year={2024},}
CSCW
Preferences for AI Explanations Based on Cognitive Style and Socio-Cultural Factors
Hana Kopecka, Jose Such, and Michael Luck
In PACM on Human-Computer Interaction - ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW), 2024
@inproceedings{kopecka2024preferences,author={Kopecka, Hana and Such, Jose and Luck, Michael},title={Preferences for AI Explanations Based on Cognitive Style and Socio-Cultural Factors},year={2024},booktitle={PACM on Human-Computer Interaction - ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW)},pages={In press.}}
AAAI
Moral Uncertainty and the Problem of Fanaticism
Jazon Szabo, Natalia Criado, Jose Such, and Sanjay Modgil
In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2024
@inproceedings{szabo2024moral,title={Moral Uncertainty and the Problem of Fanaticism},author={Szabo, Jazon and Criado, Natalia and Such, Jose and Modgil, Sanjay},booktitle={Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)},volume={38},number={18},pages={19948--19955},year={2024}}
CUI
Building Better AI Agents: A Provocation on the Utilisation of Persona in LLM-based Conversational Agents
Guangzhi Sun, Xiao Zhan, and Jose Such
In ACM Conversational User Interfaces (CUI), 2024
@inproceedings{sun2024building,author={Sun, Guangzhi and Zhan, Xiao and Such, Jose},title={Building Better {AI} Agents: {A} Provocation on the Utilisation of Persona in LLM-based Conversational Agents},booktitle={{ACM} Conversational User Interfaces {(CUI)}},pages={35},year={2024},}
AIES
A Systematic Review of Ethical Concerns with Voice Assistants
William Seymour, Nicole Zhan, Mark Coté, and Jose Such
In AAAI/ACM Conference on AI, Ethics, and Society (AIES), 2023
@inproceedings{seymour2023systematic,title={A Systematic Review of Ethical Concerns with Voice Assistants},author={Seymour, William and Zhan, Nicole and Cot{\'e}, Mark and Such, Jose},year={2023},pages={131--145},booktitle={AAAI/ACM Conference on AI, Ethics, and Society (AIES)},}
CIKM
AI in the Gray: Exploring Moderation Policies in Dialogic Large Language Models vs. Human Answers in Controversial Topics
Vahid Ghafouri, Vibhor Agarwal, Yong Zhang, Nishanth Sastry, Jose Such, and Guillermo Suarez-Tangil
In The Conference on Information and Knowledge Management (CIKM), 2023
@inproceedings{ghafouri2023ai,title={AI in the Gray: Exploring Moderation Policies in Dialogic Large Language Models vs. Human Answers in Controversial Topics},author={Ghafouri, Vahid and Agarwal, Vibhor and Zhang, Yong and Sastry, Nishanth and Such, Jose and Suarez-Tangil, Guillermo},booktitle={The Conference on Information and Knowledge Management (CIKM)},pages={556--565},year={2023},}
AIES
Not So Fair: The Impact of Presumably Fair Machine Learning Models
Mackenzie Jorgensen, Hannah Richert, Elizabeth Black, Natalia Criado, and Jose Such
In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (AIES), 2023
@inproceedings{jorgensen2023not,author={Jorgensen, Mackenzie and Richert, Hannah and Black, Elizabeth and Criado, Natalia and Such, Jose},title={Not So Fair: The Impact of Presumably Fair Machine Learning Models},year={2023},booktitle={Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (AIES)},pages={297--311},numpages={15},}
JAAMAS
An explainable assistant for multiuser privacy
Francesca Mosca and Jose Such
Autonomous Agents and Multi-Agent Systems (JAAMAS), 2022
@article{mosca2022explainable,title={An explainable assistant for multiuser privacy},author={Mosca, Francesca and Such, Jose},year={2022},volume={36},pages={1--45},journal={Autonomous Agents and Multi-Agent Systems (JAAMAS)},issn={1387-2532},number={10},}
IEEE Tech&Society
Bias and Discrimination in AI: a cross-disciplinary perspective
Xavier Ferrer, Tom van Nuenen, Jose Such, Mark Cote, and Natalia Criado
In IEEE Technology and Society, 2021
@article{ferrer2021bias,title={Bias and Discrimination in AI: a cross-disciplinary perspective},author={Ferrer, Xavier and van Nuenen, Tom and Such, Jose and Cote, Mark and Criado, Natalia},journal={IEEE Technology and Society},volume={20},number={2},pages={72--80},year={2021},}
Computer
Transparency for Whom? Assessing Discriminatory Artificial Intelligence
Tom van Nuenen, Xavier Ferrer, Jose Such, and Mark Cote
In IEEE Computer, 2020
@article{vanNuenen2020transparency,title={Transparency for Whom? Assessing Discriminatory Artificial Intelligence},author={van Nuenen, Tom and Ferrer, Xavier and Such, Jose and Cote, Mark},journal={IEEE Computer},volume={53},pages={36--44},year={2020},}
See more related publications on our Publications page.