Call for Papers
Disinformation spreads easily in online social networks, propagated by social media actors and network communities to achieve specific, mostly malevolent, objectives. Disinformation has deleterious effects on users’ real lives, as it distorts their views of societally-sensitive topics such as politics, health, or religion. Ultimately, it undermines the very fabric of democratic societies and should be countered through an effective combination of human and technical means.
Disinformation campaigns are increasingly powered by advanced AI techniques, and considerable effort has been devoted to the detection of fake content. While important, detection is only one piece of the puzzle if the phenomenon is to be addressed comprehensively. Whether a piece of information is considered fake or true often depends on the temporal and cultural contexts in which it is interpreted. This is the case, for instance, for scientific knowledge, which evolves at a fast pace and whose use in mainstream content should be updated accordingly.
Multimedia content is often at the core of AI-assisted disinformation campaigns, and their impact is directly related to the perceived credibility of that content. Significant advances in the automatic generation and manipulation of each modality have been achieved with the introduction of dedicated deep learning techniques. Visual content can be tampered with to produce manipulated yet realistic versions of it. Synthesized speech has attained a high level of quality and is increasingly difficult to distinguish from a genuine voice. Deep language models trained on huge corpora can generate text that resembles human writing. Combining these advances has the potential to boost the effectiveness of disinformation campaigns, and this combination is an open research topic that needs to be addressed in order to reduce their effects. This workshop welcomes contributions related to all aspects of AI-powered disinformation.
Topics of interest include but are not limited to:
- Disinformation detection in multimedia content (e.g., video, audio, texts, images)
- Multimodal verification methods
- Synthetic and manipulated media detection
- Multimedia forensics
- Disinformation spread and effects in social media
- Analysis of disinformation campaigns in societally-sensitive domains
- Robustness of media verification against adversarial attacks and real-world complexities
- Fairness and non-discrimination of disinformation detection in multimedia content
- Explaining disinformation and disinformation detection technologies to non-expert users
- Temporal and cultural aspects of disinformation
- Dataset sharing and governance in AI for disinformation
- Datasets for disinformation detection and multimedia verification
- Open resources, e.g., datasets, software tools
- Multimedia verification systems and applications
- System fusion, ensembling and late fusion techniques
- Benchmarking and evaluation frameworks
The workshop is supported by the H2020 project AI4Media “A European Excellence Centre for Media, Society and Democracy” (grant #951911, call ICT-48-2020 “Towards a vibrant European network of AI excellence centres”, https://www.ai4media.eu/), the Horizon Europe project vera.ai “VERification Assisted by Artificial Intelligence” (https://www.veraai.eu/), and the Horizon Europe project AI4Debunk “Participative Assistive AI-powered Tools for Supporting Trustworthy Online Activity of Citizens and Debunking Disinformation” (https://ai4debunk.eu/).