Latest News

Long Tran-Thanh is a Professor at the Department of Computer Science, University of Warwick, UK. He is currently the Director of Research of the department (Deputy Head) and the university's Chair of Digital Research Spotlight. Long has been conducting active research in a number of key areas of Artificial Intelligence and multi-agent systems, mainly focusing on multi-armed bandits, game theory, and incentive engineering, and their applications to AI for Social Good. He has published more than 80 papers at peer-reviewed A* conferences in AI/ML (including AAAI, AAMAS, CVPR, IJCAI, NeurIPS) and journals (JAAMAS, AIJ), and has received a number of prestigious national and international awards, including 2 best paper honourable mention awards at top-tier AI conferences (AAAI, ECAI) and 2 Best PhD Thesis Award Honourable Mentions (UK's BCS and Europe's ECCAI/EurAI); he is also a co-recipient of the 2021 AIJ Prominent Paper Award (for one of the 2 most influential papers published in the Artificial Intelligence Journal between 2014 and 2021). Long has also been actively involved in a number of community services, including serving as local co-chair for AAMAS 2021, AAMAS 2023, KR 2021, KR 2024, and AAMAS 2027. He is an Associate Editor for JAAMAS and a member of the Editorial Board of AIJ. Previously, he was a member of the IFAAMAS Board of Directors between 2018 and 2024 and a Turing Fellow at the Alan Turing Institute, UK.

Students, graduate students, PhD candidates, and interested colleagues are welcome to attend the seminar. Details:

  • Time: 11:00 AM, Monday, November 25, 2024
  • Venue: Room E202, Campus I, University of Science, 227 Nguyen Van Cu Street, Ward 4, District 5, Ho Chi Minh City
  • Title: Attacking Reinforcement Learning Agents via Data Poisoning and How to Defend
  • Abstract:

Bandit algorithms and Reinforcement Learning models have been widely used in many successful applications in recent years. However, it has been shown that these algorithms are vulnerable to data poisoning attacks, in which an adversary can manipulate the feedback received by our agent, guiding it to learn a suboptimal (or a targeted) behaviour in the long run. In this talk I will discuss the theoretical boundaries of such attacks, such as the provable necessary and sufficient conditions for a successful attack against different types of learning agents. I will also discuss a verification-based defence mechanism against such data poisoning attacks. This talk is a summary of our recent papers published at AAAI 2022, IJCAI 2022, and AAMAS 2024, with some new unpublished results.
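To make the attack model concrete, here is a minimal sketch (not from the talk, and not the speaker's actual construction) of reward poisoning against a standard epsilon-greedy bandit learner: the adversary subtracts a constant from the observed reward whenever the learner pulls any arm other than a chosen target arm, steering the learner toward a suboptimal arm. The bandit setup, constants, and function names here are illustrative assumptions.

```python
import random

def run_bandit(true_means, n_rounds, attack=False, target_arm=0, seed=0):
    """Epsilon-greedy bandit learner; optionally, an adversary poisons
    the observed rewards to steer the learner toward target_arm.
    Returns the index of the most-pulled arm."""
    rng = random.Random(seed)
    k = len(true_means)
    counts = [0] * k          # pull counts per arm
    est = [0.0] * k           # running mean-reward estimates
    eps = 0.1                 # exploration probability (illustrative)
    for _ in range(n_rounds):
        # Epsilon-greedy arm selection.
        if rng.random() < eps:
            arm = rng.randrange(k)
        else:
            arm = max(range(k), key=lambda a: est[a])
        # Environment feedback: true mean plus small Gaussian noise.
        reward = true_means[arm] + rng.gauss(0, 0.1)
        # Data-poisoning attack: perturb the feedback so every
        # non-target arm appears worse than the target arm.
        if attack and arm != target_arm:
            reward -= 1.0
        counts[arm] += 1
        est[arm] += (reward - est[arm]) / counts[arm]
    return max(range(k), key=lambda a: counts[a])

# Without the attack the learner settles on the truly best arm;
# under poisoning it converges to the adversary's target arm instead.
honest = run_bandit([0.2, 0.8], 2000, attack=False)
poisoned = run_bandit([0.2, 0.8], 2000, attack=True, target_arm=0)
```

The theoretical questions the talk addresses concern exactly this kind of setting: how large the perturbations must be (necessary conditions), and when a bounded perturbation budget provably suffices, for different classes of learners.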
