*Workshop on Relevance of Linguistic Structure in Neural Architectures for NLP*

*ACL 2018, July 19th*

*Papers Due: April 8th*

*https://sites.google.com/view/relsnnlp*

 

*Call for long and short papers*

There is a long-standing tradition in NLP of focusing on fundamental language
modeling tasks such as morphological analysis, POS tagging, parsing, WSD, and
semantic parsing. In the context of end-user NLP tasks, these have played
the role of enabling technologies, providing a layer of representation upon
which more complex tasks can be built. In recent years, however, we have
witnessed a number of success stories for tasks ranging from information
extraction and text comprehension to machine translation, in which the use
of embeddings and neural networks has driven state-of-the-art results to
new levels. More importantly, these are often end-to-end architectures
trained on large amounts of data that make little or no use of a
linguistically informed language representation layer. For example, the
modeling of word senses and word sense disambiguation are implicit in the
functional composition of word embeddings. Other topics, such as linear
sentence processing versus syntactic parses, or frequency-based word
segmentation versus morphological analysis, are still up for debate.

This workshop focuses on the role of linguistic structures in the neural
network era. We aim to gauge their significance in building better, more
generalizable NLP systems. We would like to address the following questions:

— Is linguistic information useful for neural network architectures: can
it improve state-of-the-art neural architectures, and how should it be
used? Does it help in building models that transfer better to new domains,
new languages, new tasks, or to other scenarios with limited annotated data?
— Are there any better implicit representations that neural networks can
extract, whether similar or not to linguistic structures, that can be
transferred or shared across tasks and, hence, serve as core language
representation layers?

*Long papers* may consist of up to eight (8) pages of content, plus
unlimited references; final versions of long papers will be given one
additional page of content (up to 9 pages) so that reviewers' comments can
be taken into account.

*Short papers* may consist of up to four (4) pages of content, plus
unlimited references. Upon acceptance, short papers will be given five (5)
content pages in the proceedings.

*Invited speakers*

— Chris Dyer
— Emily Bender
— Jason Eisner
— Mark Johnson

*Organizing Committee*

— Georgiana Dinu, Amazon
— Miguel Ballesteros, IBM Research AI
— Avirup Sil, IBM Research AI
— Anders Søgaard, University of Copenhagen
— Tahira Naseem, IBM Research AI
— Yoav Goldberg, Bar Ilan University
— Wael Hamza, IBM Research AI
— Samuel Bowman, New York University
— Radu Florian, IBM Research AI

About the author: Lidia Pivovarova

Senior lecturer at St. Petersburg State University (SPbSU); PhD student at the University of Helsinki. http://philarts.spbu.ru/structure/sub-faculties/itah_phil/teachers/pivovarova