
Seminar: John Kelleher – Attention models in deep learning for machine translation

SEMINAR

In recent years, deep learning models have made a significant impact across a range of fields, and machine translation is one such area of research. The development of the encoder-decoder architecture and its extension to include an attention mechanism have led to deep learning models achieving state-of-the-art MT results for a number of language pairs. However, an open question in deep learning for MT is which attention mechanism works best. This talk will begin by reviewing the current state of the art in deep learning for MT. The second half of the talk will present a novel attention-based encoder-decoder architecture for MT. This novel architecture is the result of collaborative research between John Kelleher, Giancarlo Salton, and Robert J. Ross.
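As background for the talk (this is the standard dot-product attention mechanism, not the speaker's novel architecture), one decoding step of attention in an encoder-decoder model can be sketched as follows. The function name, array shapes, and toy values here are illustrative assumptions:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of scores.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def dot_product_attention(encoder_states, decoder_state):
    """One step of dot-product attention (illustrative sketch).

    encoder_states: (n, d) array, one hidden state per source position.
    decoder_state:  (d,) current decoder hidden state.
    Returns (context, weights): the attended context vector and the
    attention distribution over the n source positions.
    """
    scores = encoder_states @ decoder_state   # (n,) alignment scores
    weights = softmax(scores)                 # distribution over source words
    context = weights @ encoder_states        # (d,) weighted sum of states
    return context, weights

# Toy example: 3 source positions, hidden size 4.
H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
s = np.array([2.0, 0.0, 0.0, 0.0])
context, weights = dot_product_attention(H, s)
```

The open question the abstract raises is precisely how `scores` should be computed: dot product as above, an additive feed-forward scoring function, or some other alignment model.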

Date: 2016-03-11 13:15 - 15:00

Location: T307, Olof Wijksgatan 6


Page updated: 2016-02-24 09:17
