Seminars

Upcoming events

20 September 2021, 16:00 UK Time
Srijan Kumar
Are malicious user detection models robust to adversaries?
[Zoom Registration TBA] [YouTube Live Stream TBA]


Abstract: Deep learning-based fraud detection models are widely used in practice to detect anti-social and fraudulent entities on web and social media platforms. However, adversaries are incentivized to adapt their behavior to fool these models and go undetected. Can they? In this talk, I will investigate the vulnerabilities of such models and show how adversaries can fool popular detection models. The talk is primarily based on the KDD 2021 paper "PETGEN: Personalized Text Generation Attack on Deep Sequence Embedding-based Classification Models", whose code and data are publicly available.

Bio: Srijan Kumar is an Assistant Professor in the College of Computing at the Georgia Institute of Technology. His research develops data science solutions to address high-stakes challenges on the web and in society. He has pioneered the development of user models and network science tools to enhance the well-being and safety of users. His methods are used in production at Flipkart and taught in graduate-level courses worldwide. He has received several awards, including the Facebook Faculty Award, the Adobe Faculty Award, the 2018 ACM SIGKDD Doctoral Dissertation Award runner-up, the 2018 Larry S. Davis Doctoral Dissertation Award, and a 'best of' award from WWW. His research has been the subject of a documentary and covered in the popular press, including CNN, The Wall Street Journal, Wired, and New York Magazine. He completed his postdoctoral training at Stanford University, received a Ph.D. in Computer Science from the University of Maryland, College Park, and a B.Tech. from the Indian Institute of Technology, Kharagpur.


Previous events

19 July 2021, 16:00 UK Time
Benjamin Horne
Tailoring heuristics and timing AI interventions for supporting news veracity assessments
[Recording]


Abstract: The detection of false and misleading news has become a top priority for researchers and practitioners. Despite the large number of efforts in this area, many questions remain unanswered about the ideal design of interventions so that they effectively inform news consumers. In this work, we seek to fill part of this gap by exploring two important elements of tool design: the timing of news veracity interventions and the format of the presented interventions. Specifically, in two sequential studies using data collected from news consumers through Amazon Mechanical Turk (AMT), we study whether there are differences in their ability to correctly identify fake news under two conditions: when the intervention targets novel news situations and when the intervention is tailored to specific heuristics. We find that in novel news situations users are more receptive to the advice of the AI, and further, under this condition tailored advice is more effective than generic advice. We link our findings to prior literature on confirmation bias and provide insights for news providers and AI tool designers to help mitigate the negative consequences of misinformation.

Bio: Ben Horne is an Assistant Professor in the School of Information Sciences at the University of Tennessee, Knoxville. He received his Ph.D. in Computer Science from Rensselaer Polytechnic Institute in Troy, New York, where he received the Robert McNaughton Prize for outstanding graduate in Computer Science. Dr. Horne is a highly interdisciplinary, computational social scientist whose research focuses on safety in media spaces. Broadly, this research includes analyzing disinformation, propaganda, conspiracy theories, and the like in both social media and news media. His work has been published in conference venues such as ICWSM and The Web Conference (WWW), and in journals such as ACM Transactions on Intelligent Systems and Technology and Computers in Human Behavior. Additionally, Dr. Horne's work has been widely covered in news media, such as Business Insider, Mashable, IEEE Spectrum, and YLE.


28 June 2021, 16:00 UK Time
Gareth Tyson
Exploring (Mis)Use of the WhatsApp Messaging Platform
[Recording]


Abstract: In this presentation, I will detail some of our recent work on WhatsApp. Through a set of empirical measurements, I will discuss ways in which WhatsApp has been used and misused by people around the world, covering topics such as spam and misinformation. I will conclude the presentation by discussing ways in which such activities could be moderated without compromising end-to-end encryption.

Bio: Gareth Tyson is a Senior Lecturer (Associate Professor) at Queen Mary University of London, and a Fellow at the Alan Turing Institute. He is Deputy Director of the Institute of Applied Data Science (IADS) and co-leads the Social Data Science Lab (SDS). His research is in the broad area of Internet Data Science. His work has received coverage from news outlets such as MIT Tech Review, Washington Post, Slashdot, BBC, The Times, Daily Mail, Wired, Science Daily, Ars Technica, The Independent, Business Insider and The Register. He received the Outstanding Reviewer Award four times at ICWSM (2016, 2018, 2019, 2021); the Best Student Paper Award at the Web Conference 2020; the Best Paper Award at eCrime'19; the Honourable Mention Award at the Web Conference 2018 (best paper in track); and the Best Presentation Award at INFOCOM'18.


17 May 2021, 16:00 UK Time
iDRAMA Lab members

Ask Me Anything!


This event will be different from previous ones; instead of a single speaker, various members of the iDRAMA Lab will participate in an Ask Me Anything (AMA) event. Participants will be able to ask almost anything, including questions about research, academic life, and more. The following iDRAMA Lab members have confirmed their participation and will be available to answer questions:


19 April 2021, 16:00 UK Time
Megan Squire, Elon University
Using Data Science to Understand Extremist Group Financing
[Recording]


Abstract: In this talk, Megan Squire will explain how she uses the data science process to understand the complex socio-technical phenomena that drive online hate, particularly how hate groups finance their propaganda and activities. While it can be difficult to understand how far-right extremists fundraise, due to the secretive nature of the activity and the difficulty of getting data from social media platforms, Dr. Squire's work uses publicly available data to understand the financial structure of the clandestine far-right. Her research on extremist group financing has been featured in The New York Times, The Guardian, WIRED, and numerous other venues.

Bio: Dr. Megan Squire is a professor of Computer Science at Elon University. Her main research area is applying data science techniques to understand niche and extremist online communities, particularly radical right-wing groups on social media. Dr. Squire is the author of two books on data cleaning and data mining, and over 40 peer-reviewed articles and book chapters, including several Best Paper awards. In 2017, she was named the Elon University Distinguished Scholar. She currently serves as a Senior Fellow for data analytics at the Southern Poverty Law Center, and as a Senior Fellow and head of the Technical Research Unit at the Center for Analysis of the Radical Right.


16 March 2021, 16:00 UK Time
Fabrício Benevenuto, UFMG
Deploying Real Systems to Counter Misinformation Campaigns
[Recording]


Abstract: The political debate and electoral dispute in the online space have been marked by an information war in many recent elections. In order to mitigate the misinformation problem, we developed technological solutions able to reduce the abuse of misinformation campaigns in the online space, and we deployed them during the 2018 Brazilian elections. In particular, we created a system to monitor public groups on WhatsApp and a system to monitor ads on Facebook, bringing some transparency to the campaigns in these online spaces. Our systems proved fundamental for fact-checking and investigative journalism.

Bio: Fabrício Benevenuto is an Associate Professor in the Computer Science Department of the Federal University of Minas Gerais (UFMG) and a former member of the Brazilian Academy of Science (2013-2017). In 2017, he received a Humboldt fellowship, through which he was a visiting faculty member at the Max Planck Institute. He is the author of widely cited and awarded papers, including the test-of-time award from ICWSM and a best paper nominee at WWW, both received in 2020. Currently, he leads a series of projects aimed at understanding, measuring, and countering misinformation campaigns in social networks. His work on these topics has led to a large number of relevant publications, widely cited papers, and systems with real-world impact.


FAQs

  • How frequent are the seminars?
    For now, we plan to have one seminar a month (tentatively, on the third Monday). If we get a critical mass of participants and speakers, we will switch to every two weeks.
  • At what time are the seminars?
11 AM Eastern Time / 4 PM UK Time / 5 PM Central Europe Time. As different countries switch to daylight saving time at different dates, please check the link on the schedule to convert to your time zone.
  • How do I subscribe to seminar announcements?
    You can join our Google Group (note that you need to be signed in with a Google account). You can also subscribe to our Google Calendar: [ICS] [HTML].
  • How do I join the seminars?
    You can either join on Zoom (you’ll need to register to each event with an existing Zoom account) or watch the livestream on YouTube. We also record each seminar, so please subscribe to our channel.
  • Any other questions?
    Please contact Savvas Zannettou or Antonis Papasavva.