1st International Workshop on
Disinformation and Toxic Content Analysis
(DiTox 2023)
September 12th, 2023
In conjunction with the 4th biennial conference on Language, Data and Knowledge (LDK 2023), to be held in Vienna, Austria, in September 2023
The spread of misinformation and disinformation not only affects people's perceptions and beliefs, but can also have a direct impact on democratic institutions, critical infrastructure, and people's lives and families. Most critically, it raises the fundamental question of which sources of information can be trusted at all, potentially undermining our relationship of trust with traditional media. Because of these profoundly harmful effects, disinformation is regarded as one of the most pressing problems of our time.
The loose definition of disinformation analysis and detection as a research task, together with the enormous heterogeneity and multimodality of the data involved, makes this an exceptionally challenging field of research. The complexity ranges from media-tampering detection to text content analysis to large-scale information fusion for analyzing disinformation trends, and maintaining a comprehensive overview of the field is equally difficult.
The overall goal of this workshop is therefore to provide insights into how approaches from different domains can be used to address disinformation at a technical level, including AI/ML-based methods, visual analytics and visualization approaches, as well as interdisciplinary approaches inspired by the social sciences (i.e., computational social science). To this end, we invite task-specific contributions as well as large-scale integration approaches, demos, and project presentations, in order to provide a comprehensive overview of the current state of the art in countering disinformation.
Program
- Registration
- Welcome Message by Workshop Organizers
- Keynote: Dr. Nikos Sarris
- Short Paper Session
  - A First Attempt to Detect Misinformation in Russia-Ukraine War News (Nina Khairova, Bogdan Ivasiuk, Fabrizio Lo Scudo, Carmela Comito and Andrea Galassi)
  - WIDISBOT: Widget to analyse disinformation and content spread by bots (Jose Manuel Camacho, Luis Perez-Miguel and David Arroyo)
  - Exploring Intensities of Hate Speech on Social Media (Raisa Romanov Geleta, Klaus Eckelt, Emilia Parada-Cabaleiro and Markus Schedl)
- Coffee break
- Long Paper Session
  - Debunking Disinformation with GADMO (Jonas Rieger, Nico Hornig, Jonathan Flossdorf, Henrik Müller, Stephan Mündges, Carsten Jentsch, Jörg Rahnenführer and Christina Elmer)
  - Assessing Italian News Reliability in the Health Domain through Text (Luca Giordano and Maria Pia di Buono)
  - Cross-Lingual Transfer Learning for Misinformation Detection (Oguzhan Ozcelik, Arda Sarp Yenicesu, Onur Yildirim, Dilruba Sultan Haliloglu, Erdem Ege Eroglu and Fazli Can)
- Closing Words
- Lunch
Keynote
Dr. Nikos Sarris
Nikos Sarris is a Senior Researcher at MeVer, the Media Analysis, Verification and Retrieval group of the Information Technologies Institute of CERTH, with more than 20 years of experience in R&D projects as a researcher, project manager and coordinator of large multinational consortia. He has been involved in many projects focusing on the semantic ‘understanding’ of news content and the assessment of its trustworthiness, coordinating relevant projects, activities and product development. He is currently the project coordinator of the Mediterranean Digital Media Observatory (MedDMO) and a member of the Management Committee of EDMO.
Title:
Disinformation: challenges, tools and techniques to deal or live with it
Abstract:
An introduction to the issues raised by disinformation will be followed by an analysis of methods, technological tools, prototypes and products that have been shown to help counter the problem. A discussion of more existential questions will also raise concerns about how possible, or even desirable, it is to eradicate all forms of disinformation.
Topics
Full Paper Submissions:
- Machine learning and deep learning methods for disinformation (e.g., analysis, detection)
- Visual analytics and visualization approaches for disinformation
- Social network analysis (e.g., key actors, distribution patterns) including visualization approaches
- Graph algorithms for disinformation identification
- Natural language processing methods (e.g., content evaluation, toxicity, radicalization)
- AI-supported fact checking and detection of disinformation campaigns
- Identification of fabricated and manipulated content (e.g., deep fakes, generated text)
- Community detection and characterization in social networks (e.g., conspiracy theories, echo chambers)
- Bot characterization and detection
- Multimodal fake content detection
- Recommendation systems and disinformation
- AI uses, practices and tools in fact-checking journalism
- Qualitative and quantitative studies on disinformation
- Ethics and law in disinformation
Demo and Project Presentations (Short Paper Track, Poster Presentation):
- Demo presentations (e.g., fact checking tools, disinformation detection tools)
- Project platform presentations
- Project presentations
Important Dates
- Paper submission: June 11th, 2023 (originally May 21st, 2023)
- Notification: June 21st, 2023 (originally June 20th, 2023)
- Camera-ready submission deadline: June 28th, 2023 (originally June 9th, 2023)
- DiTox workshop: September 12th, 2023
Submission
Submissions can be in the form of long papers (9–12 pages) or short papers (4–6 pages); all page limits include references. Accepted submissions will be published by ACL in an open-access conference proceedings volume, free of charge for authors. The reviewing process is single-blind; submissions should not be anonymised. The workshop will be hybrid (face-to-face and remote), and at least one author of each accepted paper must register to present the paper at the workshop, either remotely or on-site. No registration fee is charged for participating in LDK 2023. Papers should be submitted by e-mail to the following address: ditox@ait.ac.at
Organizing Committee
Alexander Schindler, Austrian Institute of Technology GmbH
Mina Schütz, Austrian Institute of Technology GmbH, Darmstadt University of Applied Sciences
Melanie Siegel, Darmstadt University of Applied Sciences
Kawa Nazemi, Darmstadt University of Applied Sciences
Matthias Zeppelzauer, St. Poelten University of Applied Sciences
Djordje Slijepčević, St. Poelten University of Applied Sciences