A novel architecture to improve syntactic analysis

Demonstrating significant improvements over previous state-of-the-art results, James Henderson and Alireza Mohammadshahi of the Natural Language Understanding group propose a novel approach to dependency parsing. Their work was presented at the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL) on April 21, 2021.

So-called self-attention models have been hugely successful in a wide range of natural language processing (NLP) tasks, such as automatic summarization, translation, named entity recognition, relationship extraction, sentiment analysis, speech recognition, and topic segmentation. Using a refinement model, James Henderson and Alireza Mohammadshahi, members of the Natural Language Understanding group, demonstrate the power and effectiveness of the Recursive Non-Autoregressive Graph-to-Graph Transformer (RNGTr) architecture on several dependency corpora. Their aim is to improve the accuracy of syntactic analysis across corpora in several languages. To achieve this, they propose a novel architecture for the iterative refinement of arbitrary graphs that combines non-autoregressive edge prediction with conditioning on the complete graph.
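The core idea, recursive non-autoregressive refinement, can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy, not the authors' model: the `score_heads` function is a hypothetical stand-in for the Graph-to-Graph Transformer, using a distance-based prior instead of learned attention. What it does show faithfully is the loop structure: at each iteration, every word's head is predicted simultaneously (non-autoregressively), conditioned on the complete graph from the previous iteration, until the parse stops changing.

```python
# Toy sketch of RNGTr-style iterative refinement for dependency parsing.
# `score_heads` is a hypothetical stand-in for the transformer; in the real
# model, conditioning on the previous graph happens via graph-aware attention.

def score_heads(sentence, prev_heads):
    """Score every candidate head for every word, conditioned on the
    complete previous graph (prev_heads), or None on the first pass."""
    n = len(sentence)
    scores = []
    for i in range(n):
        row = []
        for h in range(-1, n):  # h = -1 denotes the artificial root
            s = -abs(i - h)     # toy prior: prefer nearby heads
            if prev_heads is not None and prev_heads[i] == h:
                s += 0.5        # conditioning: mild bias toward previous graph
            if h == i:
                s = float("-inf")  # forbid self-loops
            row.append(s)
        scores.append(row)
    return scores

def refine(sentence, max_iters=3):
    """Recursively refine the full parse; each pass predicts all edges at once."""
    heads = None
    for _ in range(max_iters):
        scores = score_heads(sentence, heads)
        # Non-autoregressive step: all heads chosen simultaneously,
        # not left-to-right conditioned on earlier predictions.
        new_heads = [
            max(range(-1, len(sentence)), key=lambda h: scores[i][h + 1])
            for i in range(len(sentence))
        ]
        if new_heads == heads:  # stop early once the graph is stable
            break
        heads = new_heads
    return heads

print(refine(["The", "dog", "barks"]))  # e.g. [-1, 0, 1] under this toy scorer
```

In the paper, the scoring network is a pre-trained transformer refined over the predicted graph; this sketch only mirrors the control flow of that refinement loop.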

More information

- Article: Recursive Non-Autoregressive Graph-to-Graph Transformer for Dependency Parsing with Iterative Refinement

- EACL conference