Human-centred AI Security, Ethics and Privacy
This talk will have two main parts: 1) risks in AI systems; 2) AI for online safety. In the first part, I will discuss three risks of using AI: security, privacy, and discrimination. I will show that attacks can be performed to exploit AI models and the systems that use them, that AI-based systems can be privacy-intrusive, and that AI-based systems may have biases that lead to discrimination against particular types of users (e.g. based on gender or ethnicity). I will then outline our current research and projects on making AI safer. In the second part, I will talk about using AI to help people stay safe online. I will showcase our current research on using AI to study online language, focusing in particular on biased and toxic language and echo chambers, and to protect online privacy.
Ciudad Politécnica de la Innovación
Building 8E, Entrance J, 4th Floor (Sala Descubre, Cubo Rojo)
Universidad Politécnica de Valencia | Camino de Vera s/n
Prof Jose Such is (Full) Professor of Computer Science at King’s College London and part-time Professor at UPV. He is the founder and head of the Human-centred AI Security, Ethics and Privacy (HASP) Lab, and the founder and Director of the King’s Cybersecurity Centre, an Academic Centre of Excellence in Cyber Security Research (ACE-CSR) recognised by the NCSC and EPSRC. Before being promoted to Professor, he was Reader (2018-2021) and Senior Lecturer (2016-2018) at King’s College London, and Lecturer (2012-2016) at Lancaster University. His research is cross-disciplinary, employing computer science and social science methods, with interests at the intersection of Artificial Intelligence, Human-Computer Interaction, and Cyber Security. His research has been funded through a multi-million-pound portfolio of projects by UKRI, EPSRC, Google, the ICO, the UK Government, and Innovate UK.