Research in DTL lies at the intersection of dialogue (human-machine or human-human, spoken or multimodal) and language and communication technologies (dialogue systems, mobile phones, etc.). We focus especially on formal models of dialogue, dialogue technology applications, and dialogue corpora and analysis.
Semantic coordination in dialogue: We are currently working to develop theories of how humans coordinate their language and learn new language through interaction. We aim to investigate the possibilities of adapting machine learning techniques (currently oriented towards training on large datasets) to incremental learning on the basis of input gained through dialogic interaction. Eventually, we hope to build dialogue systems that can adjust their language to users, and learn language from users.
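To give a flavour of the incremental setting, the following toy sketch (all names are illustrative, not our actual models) contrasts with batch training: the learner revises word-meaning associations after every single exchange, rather than retraining on a large dataset.

```python
from collections import Counter, defaultdict

class IncrementalWordLearner:
    """Toy learner that updates word-meaning associations one exchange at a time."""

    def __init__(self):
        # word -> Counter of situation features observed together with that word
        self.assoc = defaultdict(Counter)

    def observe(self, word, features):
        """Integrate a single dialogue exchange; no batch retraining is needed."""
        self.assoc[word].update(features)

    def best_guess(self, word):
        """Return the feature most strongly associated with the word so far."""
        if not self.assoc[word]:
            return None
        return self.assoc[word].most_common(1)[0][0]

learner = IncrementalWordLearner()
learner.observe("ball", ["round", "red"])
learner.observe("ball", ["round", "blue"])
print(learner.best_guess("ball"))  # → round
```

After only two exchanges the learner already prefers the feature shared by both situations; each further exchange refines the association in place.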
Type-theoretical models of dialogue: We are developing TTR (Type Theory with Records), a mathematical model of dialogue between agents (human beings and computers). TTR is a type-theoretical system developed for the analysis of natural language, in particular from the perspective of interaction and learning. We have recently developed a probabilistic extension of TTR which is designed to deal with gradience and vagueness phenomena in natural language.
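The core judgement of a type theory with records can be illustrated with a deliberately simplified sketch: records pair labels with values, record types pair labels with types, and a record is of a record type when every labelled field checks out. (Real TTR is far richer, with dependent types among other things; the Python encoding below is purely illustrative.)

```python
# Basic "types" modelled as predicates on values (an assumption of this sketch).
Ind = lambda v: isinstance(v, str)              # individuals, here just names
Nat = lambda v: isinstance(v, int) and v >= 0   # natural numbers

def of_type(record, record_type):
    """A record is of a record type if every required label is present
    and its value satisfies the corresponding type."""
    return all(label in record and check(record[label])
               for label, check in record_type.items())

r = {"x": "fido", "age": 7}    # a record: labels paired with values
T = {"x": Ind, "age": Nat}     # a record type: labels paired with types
print(of_type(r, T))           # → True
print(of_type({"x": 3}, T))    # → False (3 is not an individual)
```

Record types naturally support subtyping by extension: a record with extra labels is still of the smaller type, which is useful when modelling partial information in dialogue.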
Meaning and perception in dialogue: Using the TTR model and the concept of semantic coordination in dialogue, we are developing a model of how the meanings of spatial and other perception-related vocabulary can be learnt from situated interaction. This includes an account of the relation between perception and language and thus addresses the "symbol grounding problem" in Artificial Intelligence. A key component here is the idea of using statistical classifiers to model perceptual aspects of meanings.
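As a hedged illustration of the classifier idea (the features and the word chosen are hypothetical, not our actual models), a simple perceptron can serve as the perceptual side of the meaning of a spatial word such as "left", trained from situated examples:

```python
# Sketch: a perceptron as the "perceptual meaning" of the word "left".
# Each example pairs a feature vector with a label (1 = "left" applies).

def train_perceptron(examples, epochs=10, lr=0.1):
    """Train a perceptron on (feature_vector, label) pairs, label in {0, 1}."""
    w = [0.0] * len(examples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Single feature: signed horizontal offset of the object from a landmark.
examples = [([-2.0], 1), ([-0.5], 1), ([1.0], 0), ([2.5], 0)]
w, b = train_perceptron(examples)
is_left = lambda dx: 1 if w[0] * dx + b > 0 else 0
print(is_left(-1.0), is_left(3.0))  # → 1 0
```

The point is that the word's applicability is judged by a trained classifier over perceptual features rather than by a hand-written symbolic definition, which is one way of grounding symbols in perception.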
Information State Update approaches to dialogue systems: We use the term information state to mean, roughly, the information stored internally by an agent, in this case a dialogue system. A dialogue move engine, or DME, updates the information state on the basis of observed dialogue moves and selects appropriate moves to be performed. TrindiKit is a toolkit for building and experimenting with dialogue move engines and information states. Currently, TrindiKit is being revised and reimplemented in Python, in close collaboration with Talkamatic AB. The resulting software will be available as open source.
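A minimal information-state-update loop might look as follows; this is an illustrative sketch of the general idea only, and does not reflect TrindiKit's actual API:

```python
# Toy information state with an agenda of moves to perform and a stack of
# questions under discussion (qud).
state = {"agenda": [], "qud": []}

def update(state, move):
    """Integrate an observed dialogue move into the information state."""
    kind, content = move
    if kind == "ask":
        state["qud"].insert(0, content)                 # push question
        state["agenda"].insert(0, ("answer", content))  # plan a response
    elif kind == "answer" and state["qud"]:
        state["qud"].pop(0)                             # question resolved

def select(state):
    """Select the next move to perform from the agenda, if any."""
    return state["agenda"].pop(0) if state["agenda"] else None

update(state, ("ask", "price?"))
print(select(state))  # → ('answer', 'price?')
```

Separating update rules (how observed moves change the state) from selection rules (which move to perform next) is the key architectural idea of the ISU approach.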
Cognitive load management in in-vehicle dialogue systems: We develop dialogue systems that minimize user distraction (cognitive load and head-down time) and increase safety while driving. As part of this work, we have put together a simulator platform for R&D on in-vehicle dialogue systems, including a driving simulator, eye-trackers and equipment for measuring cognitive load.
Dialogue systems in Augmentative and Alternative Communication (AAC): We are developing and testing dialogue systems and conversational systems that help people with communicative disabilities to interact and communicate. One example is a robot that acts as an interactive toy for children with severe communicative disabilities, developed and evaluated in close collaboration with DART, the Centre for AAC and AT at Queen Silvia Children's Hospital.
The Spoken Web: The Spoken Web (aka the Voice Web) refers to the combination of web technology, speech technology and language technology that will eventually open up the speech channel to the World Wide Web. Since the Web is necessarily very heavily based on standards, two CLT members have joined the W3C Voice Browser Working Group (VBWG) and are actively involved in the development of standards for the Spoken Web, in particular State Chart XML (SCXML). The open source TrindiKit implementation referred to above may also be ported to a variant of SCXML.
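To give a flavour of SCXML, a minimal state chart for a simple exchange might look like the following fragment (the state names are illustrative, but the elements and attributes are those of the W3C standard):

```xml
<!-- A dialogue that asks, interprets the reply, then ends. -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="ask">
  <state id="ask">
    <transition event="user.utterance" target="interpret"/>
  </state>
  <state id="interpret">
    <transition event="done" target="end"/>
  </state>
  <final id="end"/>
</scxml>
```

Because SCXML gives dialogue flow a standardized, declarative representation, a dialogue move engine expressed in it can in principle run in any conforming SCXML interpreter.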
We work on developing tools and methods for recording and analysing mobile communication, dialogue system interaction, human-human dialogue, and speech.