Photon Dodo

@photon-dodo

Public team

Community (0)
No community contribution yet

Joined on Aug 18, 2020

  • Authors: Rishika Bhagwatkar, Khurshed Fitter. Brief Outline: In this paper, the authors conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of the basic encoder–decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. Introduction: Most of the proposed neural machine translation models belong to a family of encoder–decoders. The only path of information transfer between the encoder and the decoder is a fixed-length vector, a.k.a. the encoding. A minimal sketch of the (soft-)search step follows this entry.
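The sketch below illustrates the additive alignment step described in that note: each encoder annotation is scored against the previous decoder state, the scores are softmax-normalised into attention weights, and their weighted sum becomes the context vector passed to the decoder. The tensor sizes and the randomly initialised weights (W_a, U_a, v_a) are illustrative assumptions, not values taken from the paper or the note.

```python
import torch
import torch.nn.functional as F

# Illustrative sizes (assumptions, not from the note)
enc_dim, dec_dim, attn_dim, src_len = 16, 16, 12, 7

# Randomly initialised parameters of the alignment model a(s, h)
W_a = torch.randn(attn_dim, dec_dim)   # projects the decoder state s_{i-1}
U_a = torch.randn(attn_dim, enc_dim)   # projects each encoder annotation h_j
v_a = torch.randn(attn_dim)            # scores the combined projection

h = torch.randn(src_len, enc_dim)      # encoder annotations h_1 .. h_T
s_prev = torch.randn(dec_dim)          # previous decoder hidden state s_{i-1}

# e_ij = v_a^T tanh(W_a s_{i-1} + U_a h_j): one score per source position
scores = torch.tanh(h @ U_a.T + s_prev @ W_a.T) @ v_a

# alpha_ij = softmax(e_ij): how relevant each source position is to word i
alpha = F.softmax(scores, dim=0)

# c_i = sum_j alpha_ij h_j: the context vector fed to the decoder at step i
context = alpha @ h
print(context.shape)  # torch.Size([16])
```

Because the context vector is recomputed at every decoding step, the model is no longer forced to squeeze the whole source sentence into a single fixed-length encoding.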
  • Authors: Rishika Bhagwatkar, Khurshed Fitter. Brief Outline: They use a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of fixed dimensionality, and then another deep LSTM to decode the target sequence from that vector. Below is a table summarising the approach:
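The following sketch shows the encoder–decoder setup described in that note: a deep encoder LSTM compresses the source into its final hidden and cell states (the fixed-length encoding), and a second deep LSTM initialised from that encoding generates the target sequence. The vocabulary sizes, dimensions, and the teacher-forced decoding loop are illustrative assumptions, not details from the paper or the note.

```python
import torch
import torch.nn as nn

# Illustrative configuration (assumptions, not from the note)
vocab_src, vocab_tgt, emb_dim, hid_dim, n_layers = 100, 100, 32, 64, 4

src_embed = nn.Embedding(vocab_src, emb_dim)
tgt_embed = nn.Embedding(vocab_tgt, emb_dim)
encoder = nn.LSTM(emb_dim, hid_dim, num_layers=n_layers, batch_first=True)
decoder = nn.LSTM(emb_dim, hid_dim, num_layers=n_layers, batch_first=True)
proj = nn.Linear(hid_dim, vocab_tgt)   # maps decoder states to target-word logits

src = torch.randint(0, vocab_src, (1, 9))   # dummy source sentence (batch of 1)
tgt = torch.randint(0, vocab_tgt, (1, 7))   # dummy shifted target sentence

# Encoder: the final (h, c) of the deep LSTM is the fixed-length encoding;
# it is the only information about the source that reaches the decoder.
_, (h, c) = encoder(src_embed(src))

# Decoder: another deep LSTM initialised from that encoding generates the
# target sequence, here run with teacher forcing for simplicity.
dec_out, _ = decoder(tgt_embed(tgt), (h, c))
logits = proj(dec_out)
print(logits.shape)  # torch.Size([1, 7, 100])
```

This is exactly the fixed-length-vector bottleneck that the attention mechanism in the previous entry is designed to relax.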