RumourEval: Determining rumour veracity and support for rumours

The 2016 word of the year, according to Oxford Dictionaries, is
"post-truth". Increasingly, we need to detect and handle the masses of fake
news on the web that are influencing our societies.

This task aims to identify and handle rumours and reactions to them, in
text. We present an annotation scheme, a large dataset covering multiple
topics — each having their own families of claims and replies —  and
concrete subtasks.

The task of analysing and determining the veracity of social media content
has recently attracted interest in the field of natural language
processing. Since the initial work, increasingly advanced systems and
annotation schemas have been developed to support the analysis of rumour
and misinformation in text. Intuitively, a veracity judgment can be
decomposed into a comparison between the assertions made in (and entailed
from) a candidate text and external world knowledge; intermediate
linguistic cues have also been shown to play a role. Critically, recent
work suggests the task is deeply nuanced and very challenging, while having
important applications in, for example, journalism and disaster mitigation.

RumourEval is a shared task in which participants analyse rumours in the
form of claims made in user-generated content, where users respond to one
another within conversations attempting to resolve the veracity of the
rumour. We define a rumour as a "circulating story of questionable
veracity, which is apparently credible but hard to verify, and produces
sufficient skepticism and/or anxiety so as to motivate finding out the
actual truth". As breaking news unfolds, gathering opinions and evidence
from as many sources as possible as communities react becomes crucial to
determining the veracity of rumours, and consequently to reducing the
impact of the spread of misinformation.

In this scenario, where one needs to listen to, and assess the testimony
of, different sources in order to reach a final decision on a rumour's
veracity, we propose to run a SemEval task consisting of the following
two subtasks:

* determining whether statements from different sources support, deny,
query or comment on rumours
* predicting the veracity of rumours
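The two subtasks can be illustrated with a toy data sketch. Note that the class names, labels, and example texts below are illustrative only, not the official data format or annotation scheme of the task:

```python
from dataclasses import dataclass

# Subtask A labels: the stance a reply takes toward the source claim.
STANCES = ("support", "deny", "query", "comment")

@dataclass
class Reply:
    text: str
    stance: str  # one of STANCES (subtask A target)

@dataclass
class RumourThread:
    source_claim: str
    replies: list
    veracity: str = "unverified"  # subtask B target, e.g. true / false / unverified

def stance_counts(thread: RumourThread) -> dict:
    """Count how many replies take each stance toward the source claim."""
    counts = {s: 0 for s in STANCES}
    for reply in thread.replies:
        counts[reply.stance] += 1
    return counts

# A hypothetical conversation thread around one rumourous claim.
thread = RumourThread(
    source_claim="Hostages have reportedly escaped the building.",
    replies=[
        Reply("Confirmed by police on the scene.", "support"),
        Reply("Where did you hear this?", "query"),
        Reply("This is false, no one has left.", "deny"),
    ],
)
print(stance_counts(thread))
# → {'support': 1, 'deny': 1, 'query': 1, 'comment': 0}
```

A subtask A system would predict each reply's stance label; a subtask B system would predict the thread-level veracity label, for which the aggregated stances of the community are a natural signal.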

The website for the task is:

Important dates:

Mon 05 Sep 2016: Training data ready
Mon 09 Jan 2017: Evaluation start
Mon 30 Jan 2017: Evaluation end
Mon 06 Feb 2017: Results posted
Mon 27 Feb 2017: Paper submissions due
Mon 03 Apr 2017: Author notifications
Mon 17 Apr 2017: Camera ready submissions due

Leon Derczynski, University of Sheffield
Kalina Bontcheva, University of Sheffield
Maria Liakata, University of Warwick
Arkaitz Zubiaga, University of Warwick
Rob Procter, University of Warwick
Geraldine Wong Sak Hoi, SWI

RumourEval is part of the PHEME project, a three-year project on Computing
Veracity, the Fourth Challenge of Big Data, funded by the EC.

About the author: Lidia Pivovarova

Senior lecturer at St Petersburg State University (SPbGU); PhD student at the University of Helsinki