Thursday, October 7 from 8:30 am to 3:30 pm EDT | 1:30 pm to 8:30 pm UTC+1
|8:30-9:00 am||1:30-2:00 pm||Opening Session|
|9:00-9:45 am||2:00-2:45 pm||Keynote Talk 1||“Experiences from the frontline: Winning, losing and learning in the fight against misinformation”, Lyric Jain|
|9:50-10:50 am||2:50-3:50 pm||Talks – Session 1||– “Enhanced Image Forensic Tools for Fact-Checkers”, Marina Gardella|
– “Online Misinformation is Linked to COVID-19 Vaccination Hesitancy and Refusal”, Francesco Pierri
– “Why good national statistics are so important for fact-checkers”, Fionntán O’Donnell
– “Understanding and Detecting Online Hate Expressed in Emoji”, Hannah Rose Kirk
|11:00-11:45 am||4:00-4:45 pm||Interactive Workshop||“Verification for All”, François D’Astier|
|12:00-1:00 pm||5:00-6:00 pm||Live Panel 1||“Regulation and Platforms: How can practice inform policy?”, Farah Lalani, Susan Ness, Yoel Roth, Jillian York|
|1:30-2:15 pm||6:30-7:15 pm||Keynote Talk 2||“A Large-Scale, Multi-Platform, and Multi-Modal Look at Online Disinformation”, Gianluca Stringhini|
|2:20-3:20 pm||7:20-8:20 pm||Talks – Session 2||– “A Holistic Approach to Fighting the COVID-19 Infodemic in Social Media: Modeling the Perspective of Journalists, Fact-Checkers, Social Media Platforms, Policy Makers, and Society”, Preslav Nakov|
– “The Middle Ground Between Manual and Automatic Fact-Checking: Detecting Previously Fact-Checked Claims”, Giovanni Da San Martino
– “How Misinformation Works for People, Not On Them”, Michael Simeone, Kristy Roschke and Shawn Walker
– “Tiplines to Combat Misinformation on Encrypted Platforms: A Case Study of the 2019 Indian Election on WhatsApp”, Ashkan Kazemi, Scott Hale
Friday, October 8 from 8:30 am to 3 pm EDT | 1:30 pm to 8 pm UTC+1
|8:45-9:45 am||1:45-2:45 pm||Talks – Session 3||– “Visualizing the Dimensions of Disinformation Campaigns”, Amruta Deshpande and Justin Hendrix|
– “Blackmarket-driven Collusive Attacks on Online Media Platforms”, Tanmoy Chakraborty
– “E-BART: Jointly Predicting and Explaining Truthfulness”, Erik Brand
– “People Expect Joint Accountability for Online Misinformation”, Gabriel Lima
|9:50-10:50 am||2:50-3:50 pm||Talks – Session 4||– “An Overview of Research on Knowledge Integrity in Wikimedia Projects”, Diego Saez-Trumper and Pablo Aragon|
– “Cross-lingual Rumour Stance Classification: A First Study with BERT and Machine Translation”, Carolina Scarton
– “Game of FAME: Automatic Detection of FAke MEmes”, Bahruz Jabiyev
– “The Emergence of ‘Deepfakes’ and Its Societal Implications: A Systematic Review”, Dilrukshi Gamage
|10:55-12:00 pm||3:55-5:00 pm||Live Panel 2||“Computational fact-checking tools: are they ready for the real world?”, Preslav Nakov, Jennifer Mathieu, Shalini Joshi, Jochen Spangenberg, David Corney|
|12:15-1:00 pm||5:15-6:00 pm||Keynote Talk 3||“AI-driven disinformation analysis and moderation: Are we there yet?”, Kalina Bontcheva|
|1:05-1:55 pm||6:05-6:55 pm||Talks – Session 5||– “Algorithmic Governance: Auditing Search and Recommendation Algorithms for Misinformation”, Tanu Mitra|
– “AletheiaFact.org: Creating A Digital Platform to Empower Journalists And Fact-Checkers During The Brazilian Presidential Elections”, Mateus Batista Santos, Tamiris Tinti Volcean
– “Human-in-the-Loop Systems for Truthfulness: A Study of Human and Machine Confidence”, Yunke Qu
|2:10-3:00 pm||7:10-8:00 pm||Talks – Session 5 (cont’d)||– “Detecting Harm in Voice Communications”, Mike Pappas|
– “Misinformation Interventions Are Common, Divisive, And Poorly Understood. How Should They Be Designed?”, Claire Leibowicz and Emily Saltz
– “Alternative Monetization on YouTube”, Yiqing Hua
|3:00-3:05 pm||8:00-8:05 pm||Closing Session|
All session descriptions are available on our event website.
Gianluca Stringhini (Boston University)
“A Large-Scale, Multi-Platform, and Multi-Modal Look at Online Disinformation”
Gianluca Stringhini is an Assistant Professor in the ECE Department at Boston University, holding affiliate appointments in the Computer Science Department, in the Faculty of Computing and Data Sciences, in the BU Center for Antiracist Research, and in the Center for Emerging Infectious Diseases Policy & Research. In his research, Gianluca takes a data-driven approach to better understand malicious activity on the Internet. Through the collection and analysis of large-scale datasets, he develops novel and robust mitigation techniques to make the Internet a safer place. His research involves a mix of quantitative analysis, (some) qualitative analysis, machine learning, crime science, and systems design. Over the years, Gianluca has worked on understanding and mitigating malicious activities like malware, online fraud, influence operations, and coordinated online harassment. His honors include an NSF CAREER Award in 2020 and multiple Best Paper Awards. Gianluca has published over 100 peer-reviewed papers, including several in top computer security conferences like IEEE Security and Privacy, CCS, NDSS, and USENIX Security, as well as top measurement, HCI, and Web conferences such as IMC, ICWSM, CSCW, and WWW.
Lyric Jain (Logically)
“Experiences from the frontline: Winning, losing and learning in the fight against misinformation”
Lyric founded Logically in 2017, after observing the breakdown in public discourse during the 2016 US presidential election and the Brexit referendum in the UK. He first began identifying opportunities for early interventions during his time at MIT, and is motivated by the impact Logically can deliver through human-machine collaborations to mitigate risks posed by information disorder. Logically now works with governments, businesses and platforms around the world to uncover and address harmful misinformation and deliberate disinformation. This year, Lyric was named Enterprise CXO Leader of the Year at the CogX Awards, and Logically was named one of the world’s most innovative artificial intelligence companies by Fast Company.
Kalina Bontcheva (University of Sheffield)
“AI-driven disinformation analysis and moderation: Are we there yet?”
Prof. Dr. Kalina Bontcheva leads the Natural Language Processing Group at the Department of Computer Science, University of Sheffield. She is also a member of Sheffield’s Centre for Freedom of the Media and a Senior Research Leader at the Big Data and Smart Society Institute in Bulgaria. Between 2014 and 2016, Bontcheva conceived and led the PHEME project, which was among the first EU projects to study computational methods for detecting and tracking disinformation in social media. She is currently the scientific director of the WeVerify project, researching AI-based methods for disinformation detection and analysis. Most recently she co-authored an ITU/UNESCO study, Balancing Act: Countering Digital Disinformation while Respecting Freedom of Expression, and two UNESCO policy briefs on Combating the Disinfodemic: Working for Truth in the Time of COVID-19.
Regulation and Platforms: How can practice inform policy?
Jillian York, Director for International Freedom of Expression at the Electronic Frontier Foundation
Yoel Roth, Head of Site Integrity at Twitter
Susan Ness, Distinguished Fellow at The German Marshall Fund of the United States
Farah Lalani, Community Lead at The World Economic Forum
Platforms are the leading space for democratic participation online, and their position as the default form of digital markets has been cemented. Given such dominance over digital marketplaces and online speech, who is best positioned to ensure that users and customers are protected from harmful online content and polluted information networks?
The legal framework regulating online platforms, and specifically third-party or user-generated content, has evolved over the past two decades since the introduction of Section 230 in the US and comparable legislation such as the EU e-Commerce Directive. Internally, platforms have continued to update their content moderation policies with the introduction of automated content moderation systems. Transparency, and in particular standardized transparency reporting across platforms, is part of the latest push by legislators to ensure platforms meet a standard of consumer protection and user safety. This panel will address questions such as:
- What is the state of information networks on platforms from a wider lens?
- What are the consensus aspects of content moderation and reducing illegal and harmful online content on platforms?
- How can we balance freedom of expression and democratic debate with content moderation and common practice?
- Can transparency reporting ultimately push consumers to pressure platforms to self-regulate?
- What are the missing/neglected gaps of common transparency reporting and who should have access to the data?
Computational fact-checking tools: are they ready for the real world?
Jennifer Mathieu, Chief Technical Officer at Graphika
Jochen Spangenberg, Deputy Head of Research & Cooperation Projects at Deutsche Welle
David Corney, Senior Data Scientist at Full Fact
Shalini Joshi, APAC Program Manager at Meedan
Preslav Nakov, Principal Scientist at Qatar Computing Research Institute (HBKU)
The accelerated growth of direct-to-consumer media ecosystems has opened space for the spread of inaccurate and misleading information by different actors. Misinformation and disinformation have fueled the rise of new conspiracy theory movements, political violence and, most recently, hesitancy toward COVID-19 vaccines and other public health measures. To combat the amplified spread of dubious information, fact-checkers and journalists have adopted new tools to identify claims worth fact-checking, detect relevant previously fact-checked claims (e.g. ClaimReview, Squash), retrieve relevant evidence for fact-checking a claim, and actually verify a claim. This panel will address questions such as:
- How do currently available technologies fare compared to what journalists and fact-checkers want?
- What are the challenges that need to be tackled to increase the impact of computational fact-checking tools in the real world?
- Is there currently a consensus among journalists, platforms, technologists and publishers when it comes to automated fact-checking?
- What types of media can automated fact-checking easily handle, and can the larger ecosystem of information verifiers contribute to automated fact-checking and the reliability of information in online networks?
Verification for All
François D’Astier, Journalist at AFP Check
An interactive session showing how debunking and fact-checking work in practice, with the help of verification tools used by fact-checkers. A member of the AFP Fact Check team – a leading global fact-checking organisation launched in 2017 that operates in multiple languages and in around 80 countries, from the US to Thailand and from France to Myanmar – will demonstrate the process behind video/photo verification, geolocation and other techniques, using specialised tools and a smart use of resources freely available online. The audience will be actively involved through exercises and live interaction.