Abstract: The detection of false and misleading news has become a top priority for researchers and practitioners. Despite the large number of efforts in this area, many questions remain unanswered about the ideal design of interventions, so that they effectively inform news consumers. In this work, we seek to fill part of this gap by exploring two important elements of tool design: the timing of news veracity interventions and the format of the presented interventions. Specifically, in two sequential studies, using data collected from news consumers through Amazon Mechanical Turk (AMT), we study whether there are differences in their ability to correctly identify fake news under two conditions: when the intervention targets novel news situations and when the intervention is tailored to specific heuristics. We find that in novel news situations users are more receptive to the advice of the AI, and further, that under this condition tailored advice is more effective than generic advice. We link our findings to prior literature on confirmation bias and provide insights for news providers and AI tool designers to help mitigate the negative consequences of misinformation.
Bio: Ben Horne is an Assistant Professor in the School of Information Sciences at The University of Tennessee Knoxville. He received his Ph.D. in Computer Science from Rensselaer Polytechnic Institute in Troy, New York, where he received the Robert McNaughton Prize for the outstanding graduate in Computer Science. Dr. Horne is a highly interdisciplinary, computational social scientist whose research focuses on safety in media spaces. Broadly, this research includes analyzing disinformation, propaganda, conspiracy theories, and the like in both social media and news media. His work has been published in conference venues such as ICWSM and The Web Conference (WWW), and in journals such as ACM Transactions on Intelligent Systems and Technology and Computers in Human Behavior. Additionally, Dr. Horne's work has been widely covered in news media, such as Business Insider, Mashable, IEEE Spectrum, and YLE.
Abstract: In this presentation, I will detail some of our recent work on WhatsApp. Through a set of empirical measurements, I will discuss ways in which WhatsApp has been used and misused by people around the world, covering topics such as spam and misinformation. I will conclude the presentation by discussing ways in which such activities could be moderated without compromising end-to-end encryption.
Bio: Gareth Tyson is a Senior Lecturer (Associate Professor) at Queen Mary University of London, and a Fellow at the Alan Turing Institute. He is Deputy Director of the Institute of Applied Data Science (IADS) and co-leads the Social Data Science Lab (SDS). His research is in the broad area of Internet Data Science. His work has received coverage from news outlets such as MIT Tech Review, Washington Post, Slashdot, BBC, The Times, Daily Mail, Wired, Science Daily, Ars Technica, The Independent, Business Insider, and The Register. He received the Outstanding Reviewer Award four times at ICWSM (2016, 2018, 2019, 2021); the Best Student Paper Award at the Web Conference 2020; the Best Paper Award at eCrime'19; the Honourable Mention Award at the Web Conference 2018 (best paper in track); and the Best Presentation Award at INFOCOM'18.
Ask Me Anything!
This event will be different compared to the previous ones; we will not have a speaker, but instead various members of the IDrama Lab will participate in an Ask Me Anything (AMA) event. Participants will be able to ask pretty much anything, including questions about research, academic life, etc. The following IDrama people have confirmed their presence at this event and will be available to answer questions:
Abstract: In this talk, Megan Squire will explain how she uses the data science process to understand the complex socio-technical phenomena that drive online hate, particularly how hate groups finance their propaganda and activities. While it can be difficult to understand how far-right extremists fundraise, due to the secretive nature of the activity and the difficulty of getting data from social media platforms, Dr. Squire's work uses publicly available data to understand the financial structure of the clandestine far right. Her research on extremist group financing has been featured in The New York Times, The Guardian, WIRED, and numerous other venues.
Bio: Dr. Megan Squire is a professor of Computer Science at Elon University. Her main research area is applying data science techniques to understand niche and extremist online communities, particularly radical right-wing groups on social media. Dr. Squire is the author of two books on data cleaning and data mining, and over 40 peer-reviewed articles and book chapters, including several Best Paper awards. In 2017, she was named the Elon University Distinguished Scholar. She currently serves as a Senior Fellow for data analytics at the Southern Poverty Law Center, and as a Senior Fellow and head of the Technical Research Unit at the Center for Analysis of the Radical Right.
Abstract: The political debate and electoral dispute in the online space have been marked by an information war in many recent elections. To mitigate the misinformation problem, we developed technological solutions able to reduce the abuse of misinformation campaigns in the online space, and we deployed them during the 2018 Brazilian elections. In particular, we created a system to monitor public groups on WhatsApp and a system to monitor ads on Facebook, bringing some transparency to the campaigns in these online spaces. Our systems proved to be fundamental for fact-checking and investigative journalism.
Bio: Fabrício Benevenuto is an Associate Professor in the Computer Science Department of the Federal University of Minas Gerais (UFMG) and a former affiliate member of the Brazilian Academy of Sciences (2013-2017). In 2017, he received a Humboldt fellowship, through which he was a visiting faculty member at the Max Planck Institute. He is the author of widely cited and awarded papers, including the test-of-time award from ICWSM and a best paper nominee at WWW, both received in 2020. Currently, he leads a series of projects toward understanding, measuring, and countering misinformation campaigns in social networks. His work on these topics has led to a large number of relevant publications, widely cited papers, and systems with real-world impact.
- How frequent are the seminars?
For now, we plan to have one seminar a month (tentatively, on the third Monday). If we get a critical mass of participants and speakers, we will switch to one every two weeks.
- At what time are the seminars?
11 AM Eastern Time / 4 PM UK Time / 5 PM Central European Time. As different countries switch to daylight saving time at different dates, please check the link on the schedule to convert to your time zone.
- How do I subscribe to seminar announcements?
You can join our Google Group (note that you need to be signed in with a Google account). You can also subscribe to our Google Calendar: [ICS] [HTML].
- How do I join the seminars?
You can either join on Zoom (you'll need to register for each event with an existing Zoom account) or watch the livestream on YouTube. We also record each seminar, so please subscribe to our channel.
- Any other questions?
Please contact Savvas Zannettou or Antonis Papasavva.