RikiNet: Reading Wikipedia Pages for Natural Question Answering


Abstract

Reading long documents to answer open-domain questions remains challenging in natural language understanding. In this paper, we introduce a new model, called RikiNet, which reads Wikipedia pages for natural question answering. RikiNet contains a dynamic paragraph dual-attention reader and a multi-level cascaded answer predictor. The reader dynamically represents the document and question by utilizing a set of complementary attention mechanisms. The representations are then fed into the predictor to obtain the span of the short answer, the paragraph of the long answer, and the answer type in a cascaded manner. On the Natural Questions (NQ) dataset, a single RikiNet achieves 74.3 F1 and 57.9 F1 on long-answer and short-answer tasks. To our best knowledge, it is the first single model that outperforms the single human performance. Furthermore, an ensemble RikiNet obtains 76.1 F1 and 61.3 F1 on long-answer and short-answer tasks, achieving the best performance on the official NQ leaderboard.
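The cascaded prediction described above (long-answer paragraph first, then the short-answer span within it, then the answer type) can be sketched as follows. This is a minimal toy illustration of the cascade ordering only; the scoring functions and data structures here are hypothetical stand-ins, not RikiNet's actual neural components.

```python
# Toy sketch of a multi-level cascaded answer predictor, following the
# abstract's ordering: pick the long-answer paragraph, then the
# short-answer span inside it, then the answer type. All scores below
# are assumed to come from some upstream model (hypothetical).

def pick_long_answer(paragraph_scores):
    """Return the index of the highest-scoring paragraph."""
    return max(range(len(paragraph_scores)), key=paragraph_scores.__getitem__)

def pick_short_answer(span_scores):
    """span_scores: dict mapping (start, end) token offsets to a score."""
    return max(span_scores, key=span_scores.get) if span_scores else None

def pick_answer_type(type_scores):
    """type_scores: dict mapping a type label (e.g. 'short', 'null') to a score."""
    return max(type_scores, key=type_scores.get)

def cascaded_predict(paragraph_scores, span_scores_per_paragraph, type_scores):
    # Each stage conditions on the previous one: the span search is
    # restricted to the chosen long-answer paragraph.
    p = pick_long_answer(paragraph_scores)
    span = pick_short_answer(span_scores_per_paragraph[p])
    return p, span, pick_answer_type(type_scores)

if __name__ == "__main__":
    paragraph_scores = [0.1, 0.7, 0.2]
    span_scores_per_paragraph = {0: {}, 1: {(3, 5): 0.9, (0, 2): 0.4}, 2: {}}
    type_scores = {"short": 0.6, "long": 0.2, "null": 0.2}
    print(cascaded_predict(paragraph_scores, span_scores_per_paragraph, type_scores))
    # (1, (3, 5), 'short')
```

In the real model the three decisions are made by learned heads over shared representations; the point of the sketch is only the cascade, where each prediction narrows the search space for the next.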

Dayiheng Liu
Ph.D. Student

My name is Dayiheng Liu (刘大一恒).

Jiancheng Lv
Dean and Professor of Computer Science at Sichuan University

My research interests include natural language processing, computer vision, industrial intelligence, smart medicine and smart cultural creation.
