
Situated learning agents (2016)

Situated agents must be able to interact with the physical environment they are located in, together with their conversational partner. Such an agent receives information both from its conversational partner and from the physical world, and it must integrate the two appropriately. Furthermore, since both the world and the language change from one context to another, it must be able to adapt to such changes and to learn from new information. Embodied and situated language processing addresses challenges in natural language processing such as word sense disambiguation and the interpretation of words in discourse, and it also gives us new insights into human cognition, knowledge, meaning and its representation. Research in vision relies on information represented in natural language, for example in the form of ontologies, as this captures how humans partition and reason about the world. Conversely, gestures and sign language are languages that are expressed and interpreted as visual information.

The master's thesis could be undertaken independently or as an extension of an existing project from the Embodied and Situated Language Processing (ESLP) course. Experience with dialogue systems and good Python programming skills are a plus.

Several projects are available, subject to the approval of the potential supervisors. The main thread of the research would be how a linguistically inquisitive robot can update its representation of the world by engaging in dialogue with a human. A robot's sensory observations may be incomplete because of errors introduced by its sensors or actuators, or simply because the robot has not yet explored and mapped the entire world. Can a robot query a human about the missing knowledge linguistically, with clarification questions? A robot's view of the world is quite different from that of a human. How can we find a mapping between the representations that a robot builds using its sensors and the representations that result from the human take on the world? The latter is challenging but necessary if robots and humans are to have a meaningful conversation.
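To make the clarification-question idea concrete, here is a minimal sketch of such a loop in Python. All names below (Percept, KnowledgeBase, ask_human) are hypothetical placeholders rather than parts of any existing robot framework, and the dialogue side is reduced to a single question per unlabelled percept:

    from dataclasses import dataclass, field
    from typing import Dict, List, Optional

    @dataclass
    class Percept:
        object_id: str                 # identifier from the robot's vision system
        label: Optional[str] = None    # linguistic label, unknown until grounded

    @dataclass
    class KnowledgeBase:
        facts: Dict[str, str] = field(default_factory=dict)

        def ground(self, percept: Percept, label: str) -> None:
            percept.label = label
            self.facts[percept.object_id] = label

    def ask_human(percept: Percept) -> str:
        """Stand-in for the dialogue manager: pose a clarification question."""
        return input(f"Please tell me. What is this ({percept.object_id})? ")

    def clarification_loop(percepts: List[Percept], kb: KnowledgeBase) -> None:
        # Query the human only about percepts the robot could not label itself.
        for p in percepts:
            if p.label is None:
                kb.ground(p, ask_human(p))

    if __name__ == "__main__":
        kb = KnowledgeBase()
        clarification_loop([Percept("obj-1"), Percept("obj-2")], kb)
        print(kb.facts)

In a real system the percepts would come from the robot's vision pipeline, and the question would be generated and interpreted by a dialogue system rather than read from standard input.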

Here are some suggested tasks:

A Lego robot, a miniature environment with blocks in a room

  • Online linguistic annotation of objects and situations that a robot discovers: "Please tell me. What is this? And this?"
  • The ability to reason about the discovered objects (i.e. creating more complex propositions from simple ones) using some background knowledge: "Aha, this is a chair... so I would expect to find a table here as well." A minimal sketch of this step follows the list.
  • Extracting the ontological information used in the previous task from external text resources (e.g. Wikipedia).
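The following sketch illustrates the reasoning step in the second task, assuming background knowledge in the form of invented co-occurrence rules ("if X is present, expect Y nearby"). In the project such rules would come from the ontological information extracted in the third task rather than being hard-coded:

    # Background knowledge: "if X is present, expect Y nearby".
    # These rules are invented examples for illustration only.
    CO_OCCURS_WITH = {
        "chair": {"table"},
        "table": {"chair", "lamp"},
        "bed": {"wardrobe"},
    }

    def expectations(discovered: set) -> set:
        """Objects the robot should expect nearby but has not yet observed."""
        expected = set()
        for obj in discovered:
            expected |= CO_OCCURS_WITH.get(obj, set())
        return expected - discovered

    print(expectations({"chair"}))   # -> {'table'}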

Microsoft Kinect or Microsoft Robotics Studio, a table situation with objects

  • Learning spatial relations between objects on the table in interaction with humans (using, for example, the Attentional Vector Sum model of Regier and Carlson; a simplified sketch follows the list)
  • Integrating and testing the effects of adding non-spatial features (the influence of dialogue and of knowledge about the objects) in the learning model.
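The following is a simplified sketch of the Attentional Vector Sum model for the relation "above". It keeps only the core mechanism (attention weights that decay with distance from an attentional focus, and an attention-weighted sum of vectors from landmark points to the trajector) and simplifies the rest: the focus is taken to be the landmark point nearest the trajector, the height function is omitted, and the parameters are not fitted to human judgements as in Regier and Carlson's work:

    import numpy as np

    def avs_above(landmark_points: np.ndarray, trajector: np.ndarray,
                  lam: float = 1.0) -> float:
        """Acceptability in [0, 1] of 'trajector is above landmark'.

        landmark_points: (N, 2) array of points making up the landmark.
        trajector:       (2,) array, the located object's position.
        lam:             attentional spread parameter (free parameter).
        """
        # Attentional focus: here simply the landmark point closest
        # to the trajector.
        dists = np.linalg.norm(landmark_points - trajector, axis=1)
        focus = landmark_points[np.argmin(dists)]

        # Attention decays exponentially with distance from the focus,
        # scaled by the trajector-focus distance (sigma).
        sigma = max(np.linalg.norm(trajector - focus), 1e-9)
        d_focus = np.linalg.norm(landmark_points - focus, axis=1)
        attention = np.exp(-d_focus / (lam * sigma))

        # Attention-weighted sum of vectors from landmark points
        # to the trajector.
        vectors = trajector - landmark_points
        direction = (attention[:, None] * vectors).sum(axis=0)

        # Deviation of the summed vector from upright vertical, in degrees.
        deviation = np.degrees(np.arctan2(direction[0], direction[1]))

        # Linear drop-off in acceptability with angular deviation.
        return float(np.clip(1.0 - abs(deviation) / 90.0, 0.0, 1.0))

    square = np.array([[x, y] for x in range(5) for y in range(5)], dtype=float)
    print(avs_above(square, np.array([2.0, 9.0])))   # high: directly above
    print(avs_above(square, np.array([9.0, 2.0])))   # low: off to the side

In the project, the parameters of the model (and the contribution of non-spatial features) would be learned from human data collected in interaction.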

Generating route descriptions in a complex building

  • How can route descriptions be generated that provide the right kind of information, so that a person finds the objects or locations referred to?
  • Using a map of a complicated building (DG4) and a representation of its salient features, build a computational model that generates such descriptions (a minimal sketch follows the list).
  • Connect that system with a dialogue system and explore the interaction of referring expressions with the structure and the content of dialogue.
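Here is a minimal sketch of landmark-based route description generation. The building graph, the landmark annotations and the phrasing are invented placeholders; a real system would use the map of DG4 and a computational model of salience to choose which landmarks to mention:

    from collections import deque

    # Adjacency list over named locations (nodes of the building map).
    BUILDING = {
        "entrance": ["hallway"],
        "hallway": ["entrance", "stairs", "room_101"],
        "stairs": ["hallway", "second_floor"],
        "second_floor": ["stairs", "room_201"],
        "room_101": ["hallway"],
        "room_201": ["second_floor"],
    }

    # A salient landmark visible at each location (hand-annotated here).
    LANDMARKS = {
        "hallway": "the red sofa",
        "stairs": "the glass staircase",
        "second_floor": "the notice board",
    }

    def shortest_path(start: str, goal: str) -> list:
        """Breadth-first search over the building graph."""
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for nxt in BUILDING[path[-1]]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return []

    def describe(path: list) -> str:
        """Verbalise a path, anchoring each step to a landmark if one exists."""
        steps = []
        for loc in path[1:]:
            name = loc.replace("_", " ")
            if loc in LANDMARKS:
                steps.append(f"go to {name}, past {LANDMARKS[loc]}")
            else:
                steps.append(f"go to {name}")
        return "From {}, {}.".format(path[0].replace("_", " "),
                                     ", then ".join(steps))

    print(describe(shortest_path("entrance", "room_201")))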

Grounded meaning representations

  • Work towards a novel model of grounded meaning representations and validate it in an experiment such as that of Roy (2002) and others
  • How can information from vector space models be integrated with perceptual information?
  • What are good and effective models of information fusion, i.e. of the interaction between different dimensions of meaning? For example, how can world knowledge be incorporated with perceptual meaning to deal with the cases of spatial cognition described in the work of Coventry and in our own work? (A minimal fusion sketch follows the list.)
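As one illustration of the fusion question, here is a minimal sketch that combines a distributional word vector with a perceptual feature vector by weighted concatenation of the L2-normalised vectors. The vectors below are random stand-ins, and the fusion weight alpha is an assumed free parameter that would have to be tuned on data; more sophisticated fusion models (e.g. learned joint spaces) are of course possible:

    import numpy as np

    def fuse(distributional: np.ndarray, perceptual: np.ndarray,
             alpha: float = 0.5) -> np.ndarray:
        """Weighted concatenation of two L2-normalised modality vectors."""
        d = distributional / np.linalg.norm(distributional)
        p = perceptual / np.linalg.norm(perceptual)
        return np.concatenate([alpha * d, (1 - alpha) * p])

    rng = np.random.default_rng(0)
    word_vec = rng.standard_normal(300)   # e.g. from a vector space model
    vis_vec = rng.standard_normal(64)     # e.g. from visual classifiers
    grounded = fuse(word_vec, vis_vec, alpha=0.6)
    print(grounded.shape)                 # (364,)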

Earlier project (which this project could build on)

Supervisor(s)

Simon Dobnik and other members of the Dialogue Technology Lab; for extracting ontological information, also members of the Text Technology Lab
