Finished project

Ticnet dialogue agent for social media platforms

Goal

The goal of the project is to evaluate and further develop an existing TDM interface to the Ticnet ticket-booking service. The app communicates with users through text interaction on social media services, and optionally also through spoken interaction in a smartphone app.

Problem description

Talkamatic have developed a rough prototype for a Ticnet application which allows written (in a terminal window) or spoken (on a smartphone) interaction. Ticnet want a more extensive prototype which communicates with users through text interaction on social media services. The prototype should be used by a test group and evaluated using a variety of methods, including user surveys.
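As a rough illustration of the architecture described above, the text-channel prototype can be pictured as a thin adapter that relays messages between a social-media channel and the dialogue manager. Everything in this sketch is hypothetical: it shows neither Talkamatic's actual TDM API nor Ticnet's services, only the general shape of such an adapter.

```python
# Hypothetical sketch of a social-media text adapter for a
# dialogue manager; none of these names are real APIs.

class EchoDialogueManager:
    """Stand-in for the dialogue manager: maps user utterances to replies."""
    def respond(self, utterance):
        if "ticket" in utterance.lower():
            return "Which event would you like tickets for?"
        return "Sorry, I can only help with ticket bookings."

def handle_incoming(dm, user_id, text, send):
    """Relay one incoming social-media message and send the reply back."""
    reply = dm.respond(text)
    send(user_id, reply)
    return reply

sent = []
dm = EchoDialogueManager()
handle_incoming(dm, "user42", "I want a ticket",
                lambda uid, msg: sent.append((uid, msg)))
print(sent[0][1])  # Which event would you like tickets for?
```

The same `handle_incoming` function could be wired to any messaging channel's webhook, which is what makes the social-media deployment largely independent of the dialogue logic.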

The role of Talkamatic will be (1) technical support concerning TDM application development and (2) to formulate requirements and give feedback on ideas and prototypes.

The role of Ticnet will be (1) technical support concerning their APIs and services, and (2) to formulate requirements and give feedback on ideas and prototypes.

Recommended skills

Python programming. Familiarity with other languages (C++, Java, PHP) is a plus, as is familiarity with APIs and with guidelines, tools and processes for software development.

Supervisors

  • Staffan Larsson (FLoV, GU)
  • External supervisor from Talkamatic AB.
  • Requirements, feedback, comments from Ticnet.

Ticnet is the leading marketplace in Sweden for tickets to sports, culture, music and entertainment events. Since 2004, Ticnet has been a wholly owned subsidiary of the American company Ticketmaster. Ticnet sells about 12 million tickets per year across 25,000 events, and Ticnet.se has 1,100,000 unique visitors each month.

The Second Workshop on Action, Perception and Language (APL'2)


SLTC workshop, November 13, 2014, Uppsala, Sweden

Workshop programme and proceedings

09:00 - 09:05 Welcome
Session 1
09:05 - 09:35 Francesco-Alessio Ursini and Aijun Huang: Objects and Nouns: Ontologies and Relations
09:35 - 10:05 Simon Dobnik, Robin Cooper and Staffan Larsson: Type Theory with Records: a General Framework for Modelling Spatial Language
10:05 - 10:25 Coffee break
Session 2
10:25 - 10:55 Robert Ross and John Kelleher: Using the Situational Context to Resolve Frame of Reference Ambiguity in Route Description
10:55 - 11:25 Raveesh Meena, Johan Boye, Gabriel Skantze and Joakim Gustafson: Using a Spoken Dialogue System for Crowdsourcing Street-level Geographic Information
11:25 - 11:55 Robert Ross: Looking Back at Daisie: A Retrospective View on Situated Dialogue Systems Development
11:55 - 12:00 Concluding remarks

Situated agents must be able to interact both with the physical environment they are located in and with their conversational partners. Such an agent receives information from its conversational partner and from the physical world, and it must integrate the two appropriately. Furthermore, since both the world and language vary from one context to another, it must be able to adapt to such changes or to learn from new information. Embodied and situated language processing addresses challenges in natural language processing such as word sense disambiguation and the interpretation of words in discourse, and it also gives us new insights into human cognition, knowledge, and the representation of meaning. Research in vision relies on information represented in natural language, for example in the form of ontologies, as these capture how humans partition and reason about the world. Conversely, gestures and sign language are languages that are expressed and interpreted as visual information.

The Second Workshop on Action, Perception and Language (APL'2) is a continuation of a successful APL workshop held at SLTC 2012 in Lund and is intended to be a networking and community-building event for researchers who are interested in any form of interaction of natural language with the physical world in a computational framework. Example areas include semantic theories of human language, action and perception, situated dialogue, situated language acquisition, grounding of language in action and perception, spatial cognition, generation and interpretation of gestures, generation and interpretation of scene descriptions from images and videos, integrated robotic systems, and others. We welcome papers that describe both theoretical and practical solutions, as well as work in progress.

Research connecting language and the world is a burgeoning area to which several international conferences and workshops are devoted. It connects several scientific communities: natural language technology, computer vision, robotics, localisation and navigation. Traditionally, natural language technology has worked separately from the other fields, but research in the last 15 years has shown that there are many synergies between them and that hybrid approaches may provide better solutions to many challenging problems, for example the interpretation and generation of spatial language and object recognition. We hope that the APL workshop, collocated with the SLTC conference, will become a local forum leading to new collaborations between the computer vision and natural language communities in Sweden.

Image of a Crispin apple courtesy of New York Apple Association, © New York Apple Association.

Workshop organisers

  • Simon Dobnik (University of Gothenburg)
  • Staffan Larsson (University of Gothenburg)
  • Robin Cooper (University of Gothenburg)

Programme committee

Anja Belz (University of Brighton), Johan Boye (KTH), Ellen Breitholtz (University of Gothenburg), Robin Cooper (University of Gothenburg), Nigel Crook (Oxford Brookes University), Kees Van Deemter (University of Aberdeen), Simon Dobnik (University of Gothenburg), Jens Edlund (KTH), Raquel Fernández (University of Amsterdam), Joakim Gustafson (KTH), Pat Healey (Queen Mary, University of London), Anna Hjalmarsson (KTH), Christine Howes (University of Gothenburg), John Kelleher (DIT), Emiel Krahmer (Tilburg University), Torbjörn Lager (University of Gothenburg), Shalom Lappin (King's College, London), Staffan Larsson (University of Gothenburg), Pierre Lison (University of Oslo), Peter Ljunglöf (University of Gothenburg/Chalmers), Joanna Isabelle Olszewska (University of Gloucestershire), Stephen Pulman (University of Oxford), Matthew Purver (Queen Mary, University of London), Robert Ross (DIT), David Schlangen (Bielefeld University), Gabriel Skantze (KTH), Holger Schultheis (University of Bremen), and Mats Wirén (Stockholm University)

Call for papers

We welcome 2 page extended abstracts formatted according to the SLTC templates for LaTeX and Word.

Please submit your abstract as a pdf document with your author details removed through EasyChair here.

The submitted abstracts will be published on the workshop web-page and the authors will be given an opportunity to present their work at the workshop as oral presentations and/or posters (depending on the type and number of submissions).

Following the workshop the contributing authors will be invited to submit full-length (8 page) papers to be published in the CEUR Workshop Proceedings (ISSN 1613-0073) online.

Important dates

  • July 14: submission opens
  • September 30 (extended from September 25): extended abstract submission deadline
  • October 9: notification of acceptance
  • October 13: SLTC early registration deadline
  • October 30: camera-ready extended abstracts for publication
  • November 13, 09:00 - 12:00: workshop

Contact details

apl@dobnik.net

TTNLS: Programme


All talks will take place in lecture room HC1 at Chalmers University of Technology. For details, see the map here. Each talk is allotted 25 minutes for presentation and 5 minutes for questions. Full papers are available in the ACL proceedings.

8:45 - 9:00: Opening remarks
Robin Cooper

9:00 - 10:00: Invited talk: Types and Records for Predication
Aarne Ranta

10:00-10:30: System with Generalized Quantifiers on Dependent Types for Anaphora
Justyna Grudzinska (University of Warsaw, Institute of Philosophy) and Marek Zawadowski (University of Warsaw, Institute of Mathematics)

10:30 - 11:00: Coffee break

11:00 - 11:30: Monads as a Solution for Generalized Opacity
Gianluca Giorgolo (University of Oxford) and Ash Asudeh (University of Oxford & Carleton University)

11:30 - 12:00: The Phenogrammar of Coordination
Chris Worth
The Ohio State University

12:00 - 12:30: Natural Language Reasoning Using Proof Technology: Rich Typing and Beyond
Stergios Chatzikyriakidis and Zhaohui Luo
Royal Holloway, University of London

12:30 - 14:00: Lunch

14:00 - 14:30: A Type-Driven Tensor-Based Semantics for CCG
Jean Maillard (University of Cambridge), Stephen Clark (University of Cambridge), Edward Grefenstette (University of Oxford)

14:30 - 15:00: From Natural Language to RDF Graphs with Pregroups
Antonin Delpeuch (École Normale Supérieure) and Anne Preller (LIRMM)

15:00 - 15:30: Incremental semantic scales by strings
Tim Fernando
Trinity College Dublin

15:30 - 16:00: Coffee break

16:00 - 16:30: A Probabilistic Rich Type Theory for Semantic Interpretation
Robin Cooper (University of Gothenburg), Simon Dobnik (University of Gothenburg), Shalom Lappin (King's College London), Staffan Larsson (University of Gothenburg)

16:30 - 17:00: Probabilistic Type Theory for Incremental Dialogue Processing
Julian Hough and Matthew Purver
Queen Mary University of London

17:00 - 17:30: Abstract Entities: a type theoretic approach
Jonathan Ginzburg (Université Paris-Diderot, Paris 7), Robin Cooper (University of Gothenburg), Tim Fernando (Trinity College Dublin)

17:30 - 18:30: Concluding discussion

19:00 - 21:00: EACL Reception


Workshop on Language, Action and Perception (APL)


SLTC workshop, October 25, 2012, Lund, Sweden

Call for papers

The Workshop on Language, Action and Perception (APL) is intended to be a networking and community-building event for researchers who are interested in any form of interaction of natural language with the physical world in a computational framework. Both theoretical and practical proposals are welcome. Example areas include semantic theories of human language, action and perception, situated dialogue, situated language acquisition, grounding of language in action and perception, spatial cognition, generation and interpretation of scene descriptions from images and videos, integrated robotic systems, and others. We would also like to welcome researchers from the computer vision and robotics communities, who are increasingly using linguistic representations such as ontologies to improve image interpretation, object recognition, localisation and navigation.

Programme committee

Johan Boye (KTH)
Robin Cooper (University of Gothenburg)
Nigel Crook (Oxford Brookes University)
Simon Dobnik (University of Gothenburg)
Raquel Fernandez (University of Amsterdam, The Netherlands)
John Kelleher (Dublin Institute of Technology, Ireland)
Staffan Larsson (University of Gothenburg)
Peter Ljunglöf (Chalmers University of Technology)
Robert Ross (Dublin Institute of Technology, Ireland)

Invited talks

Johan Boye (KTH) and Gabriel Skantze (KTH)

Submission details

We welcome 2 page extended abstracts formatted according to the SLTC templates for LaTeX and Word.

Please submit your abstract as a pdf document with your author details removed through EasyChair here.

The submitted abstracts will be published on the workshop web page, and the authors will be given an opportunity to present their work at the workshop in the form of brief oral presentations followed by a poster session.

Following the workshop the contributing authors will be invited to submit full-length (8 page) papers to be published in the CEUR Workshop Proceedings (ISSN 1613-0073) online.

Important dates

  • 10 September 2012: abstract submission
  • 17 September 2012: extension of abstract submission deadline
  • 8 October 2012: notification of acceptance
  • 22 October 2012: camera-ready abstracts for publication online

Workshop organisation

If you are coming to the workshop, please don't forget to register for SLTC 2012 here. The registration is sponsored by the GSLT and is therefore free for all participants.

The workshop will take place on October 25th 2012 in room E 1145 of the E building at LTH (Lunds tekniska högskola, part of Lund University), close to the other two SLTC 2012 workshops. You can find a map here.

The room will have a projector and a wifi connection. Eduroam should allow you to connect to the internet; if you don't have an Eduroam account, let us know in advance so we can request wifi vouchers from the SLTC organisers.

Workshop programme and proceedings

Workshop organisers

Simon Dobnik, Staffan Larsson, Robin Cooper, Centre for Language Technology and Department of Philosophy, Linguistics, and Theory of Science, Gothenburg University

Contact details

name [dot] surname [at] gu [dot] se or apl2012 [at] easychair [dot] org

Image of a Red Rome apple courtesy of New York Apple Association, © New York Apple Association.

Maharani: An Open-Source Python Toolkit for ISU Dialogue Management

Building on the previous TrindiKit implementation of the ISU approach to dialogue management (which ran on a proprietary Prolog system), we are now developing Maharani, an open-source Python-based ISU dialogue manager, together with Talkamatic AB. The first release is expected in spring 2012.
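The core idea of the ISU (information-state update) approach is that dialogue management is driven by declarative update rules over a shared information state. A minimal sketch of such a loop in Python follows; all class, field and rule names here are hypothetical illustrations of the approach, not Maharani's actual API.

```python
# Minimal sketch of an information-state update (ISU) dialogue loop.
# All names are illustrative; they are not Maharani's actual API.

class InformationState:
    """Tracks the dialogue state shared between participant moves."""
    def __init__(self):
        self.agenda = []   # actions the system intends to perform
        self.qud = []      # questions under discussion (stack, top first)

def integrate_user_ask(state, move):
    """Update rule: a user question is pushed onto QUD and scheduled."""
    if move.get("type") == "ask":
        state.qud.insert(0, move["content"])
        state.agenda.append(("respond", move["content"]))
        return True
    return False

def select_next_move(state):
    """Selection rule: take the next agenda item as the system's move."""
    return state.agenda.pop(0) if state.agenda else ("listen", None)

state = InformationState()
integrate_user_ask(state, {"type": "ask", "content": "price?"})
print(select_next_move(state))  # ('respond', 'price?')
```

Separating update rules (integration) from selection rules is what lets ISU systems mix and match dialogue behaviours without rewriting the control loop.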

Funding: DTL internal

Researchers: Staffan Larsson, Sebastian Berlin

Reliable Dialogue Annotation for the DICO Corpus

Our purpose is to annotate seven pragmatic categories in the DICO (Villing and Larsson, 2006) corpus of spoken language in an in-vehicle environment, in order to find out more about the distribution of these categories and how they correlate. Some of the annotations have already been made by one annotator.

To strengthen the results from this work, we are interested in establishing the degree of inter-coder reliability for the annotations. Also, as far as we know, no attempts have been made to annotate enthymemes (Breitholtz and Villing, 2008), a type of defeasible argument, in spoken dialogue. A corpus of spoken discourse annotated for enthymemes would therefore be a welcome addition to the resources that are currently available.
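Inter-coder reliability for categorical annotations of this kind is commonly measured with chance-corrected agreement, for example Cohen's kappa. A minimal sketch of the computation (the labels below are invented for illustration, not data from DICO):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two annotators' label lists."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # observed agreement: proportion of items labelled identically
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # expected agreement if both coders labelled at random with
    # their observed label frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["enthymeme", "other", "other", "enthymeme", "other"]
b = ["enthymeme", "other", "enthymeme", "enthymeme", "other"]
print(round(cohens_kappa(a, b), 2))  # 0.62
```

Kappa values are conventionally read against thresholds (e.g. above 0.6 as substantial agreement), though what counts as acceptable depends on the category scheme.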

Researchers: Jessica Villing, Ellen Breitholtz, Staffan Larsson (supervisor)

Funding: CLT internal

End-of-utterance detection

The current dialogue system used at the Dialogue Lab, GoDiS, depends on cut-off values to control turn-taking: when the user has not spoken for a period of time, the system assumes the user is finished and takes the turn. This can lead both to interruptions and to unnecessarily long waits for the user.

To solve this problem, the system has to be able to detect whether a speaker is finished or merely pausing within an utterance. If the system can reliably detect the user's end of utterance, it can take the turn more quickly when the user is finished, and avoid interrupting when the user is not.

To detect end of utterance, we assume that the system needs information from several sources: syntax, prosody, and the dialogue information state. We will use machine learning, via the Weka toolkit, to create a statistical model for end-of-utterance detection.

We will attempt to create a model that allows the system to differentiate between user pauses within an utterance, and user pauses at the end of an utterance.
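As a toy illustration of the idea, the cues from the different sources can be combined into a single end-of-utterance score and compared against a threshold. The feature names and weights below are invented for illustration only; the project itself trains its models with Weka rather than hand-picking weights.

```python
def end_of_utterance_score(features):
    """Combine cues into an end-of-utterance score in [0, 1].
    Feature names and weights are illustrative, not learned values."""
    # Longer silence, falling pitch and a syntactically complete
    # phrase all make a turn-final pause more likely.
    score = min(features["pause_ms"] / 1000.0, 1.0) * 0.5
    score += 0.3 if features["pitch_slope"] < 0 else 0.0
    score += 0.2 if features["syntax_complete"] else 0.0
    return score

def should_take_turn(features, threshold=0.6):
    """Decide whether the system should take the turn now."""
    return end_of_utterance_score(features) >= threshold

# A mid-utterance hesitation: short pause, level pitch, incomplete phrase
print(should_take_turn({"pause_ms": 300, "pitch_slope": 0.1,
                        "syntax_complete": False}))  # False
# A likely turn end: long pause, falling pitch, complete phrase
print(should_take_turn({"pause_ms": 900, "pitch_slope": -0.4,
                        "syntax_complete": True}))   # True
```

In the trained model, the classifier effectively learns such weights (and interactions between cues) from annotated data instead of having them fixed by hand.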

Funding: CLT internal.

Duration: August 2011 - October 2012

Researchers: Kristina Lundholm Fors, Staffan Larsson (supervisor).

 

Lekbot

A talking and playing robot for children with communicative disabilities

En talande och lekande robot för barn med funktionshinder

Project description (in English)

Lekbot is a collaboration between DART, Talkamatic and the Dept. of philosophy, linguistics and theory of science, University of Gothenburg. It is funded by VINNOVA, and runs from March 1, 2010 to August 31, 2011.

The project uses current theory and technology in human communication, computer communication and dialogue systems to develop a toy that is fun and interesting for young people with communicative disabilities. The toy supports the development of dialogical communication, an area that is normally problematic for these children. Children with severe disabilities often have few opportunities to play independently and to interact on equal terms with children without disabilities; Lekbot enables children with and without disabilities to interact and learn from each other.

The Lekbot toy developed in the project is a radio-controlled robot that can be used by children with severe physical and/or cognitive disabilities, such as cerebral palsy or autism. The robot is controlled by the child through touch-screen symbols. The symbols are translated into spoken language, so that the touch screen "talks" to the robot and acts as the child's voice. The robot can, in turn, talk to the child using spoken language, and the child can again answer using the touch screen.

The Lekbot project is a development of TRIK.

The robot is built using Lego Mindstorms NXT, and the dialogue system is developed using GoDiS. The project is supported by Acapela, whose speech synthesis is used for both the touch screen and the robot.
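The touch-screen mechanism described above can be pictured as a mapping from symbols to spoken utterances. The symbols and phrases below are invented for illustration; the actual system is built on GoDiS with Acapela speech synthesis rather than a fixed lookup table.

```python
# Hypothetical sketch of the touch-screen symbol interface: each
# symbol the child presses maps to a spoken utterance, which is
# voiced by the synthesiser and addressed to the robot.
SYMBOL_TO_UTTERANCE = {
    "forward": "Drive forward!",
    "dance": "Do a dance!",
    "sing": "Sing a song!",
}

def press_symbol(symbol):
    """Turn a touch-screen press into the child's spoken message."""
    utterance = SYMBOL_TO_UTTERANCE.get(symbol)
    if utterance is None:
        return "I don't know that one."
    return utterance

print(press_symbol("dance"))  # Do a dance!
```

In the real system the robot can also ask questions back, so the mapping is embedded in a full dialogue loop rather than a one-way command channel.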

Publications

  • A leaflet describing Lekbot (in Swedish): [lekbot-broschyr.pdf]
  • Abstract for poster at the third Swedish Language Technology Conference (SLTC 2010), Linköping 28-29th October 2010: [lekbot-sltc2010.pdf]
  • Paper describing Lekbot and our evaluation, presented at SLPAT 2011, 2nd Workshop on Speech and Language Processing for Assistive Technologies, Edinburgh 30th July 2011: [lekbot-slpat2011.pdf]
  • A news story from Swedish TV4, June 21st 2011: [tv4play, link apparently dead]
  • A news story from Swedish SVT Västnytt, May 8th 2014: [svt.se]

Contact: lekbot "at" talkamatic.se

Project description (in Swedish)

Lekbot är ett samarbete mellan DART, Talkamatic och Inst. för filosofi, lingvistik och vetenskapsteori, Göteborgs universitet. Det finansieras av VINNOVA, och pågår 1 mars 2010 till 31 augusti 2011.

Projektet utnyttjar aktuell teknik inom mänsklig kommunikation, datakommunikation och dialogsystem för att förse unga människor med kommunikationssvårigheter med en rolig och spännande leksak, som dessutom hjälper dem att utveckla sin förmåga inom ett område där deras funktionshinder i vanliga fall hindrar dem, nämligen dialog. För barn med stora rörelsehinder finns få möjligheter att leka självständigt och att samspela på lika villkor med barn utan funktionshinder. Genom en leksak som barn kan använda oavsett funktionshinder eller ej, ges barnen möjlighet att samspela och lära av varandra.

I Lekbot utvecklas en radiostyrd robot som kan användas av barn och ungdomar med svåra fysiska och /eller kommunikativa funktionshinder, såsom cerebral pares eller autism. Roboten styrs av barnet genom att peka på symboler på en pekskärm. Symbolerna översätts till talat språk, så att pekskärmen "pratar" med roboten och berättar vad barnet vill. Roboten å sin sida kan ställa frågor tillbaka, som barnet svarar på genom att peka på symboler.

Lekbotprojektet är en vidareutveckling av TRIK.

Roboten är byggd med Lego Mindstorms NXT, och dialogsystemet utvecklas i GoDiS. Projektet stöds av Acapela, vars talsyntes används för både pekskärmen och roboten.

Publikationer

  • En broschyr som beskriver Lekbot: [lekbot-broschyr.pdf]
  • Abstract för poster på den tredje Swedish Language Technology Conference (SLTC 2010), Linköping 28-29 oktober 2010 (på engelska): [lekbot-sltc2010.pdf]
  • En artikel som beskriver Lekbot och vår evaluering, presenterad på SLPAT 2011, 2nd Workshop on Speech and Language Processing for Assistive Technologies, Edinburgh 30 juli 2011: [lekbot-slpat2011.pdf]
  • Ett nyhetsinslag i TV4, 21 juni 2011: [tv4play, länken är död]
  • Ett nyhetsinslag i SVT Västnytt, 8 maj 2014: [svt.se]

Kontakt: lekbot "at" talkamatic.se

GPCC: Person-centred care in paediatric diabetes (and obesity)

This project is not primarily a language technology project. However, the collection, transcription and coding of conversations form part of the project, and these activities are carried out within DTL.

Project coordinator:
Ewa Wikström, Företagsekonomiska institutionen.

Scientific management leaders:
Gun Forsander, Institutionen för Kliniska Vetenskaper/Barnmedicin, Inga-Lill Johansson, Företagsekonomiska institutionen, Christian Munthe, Institutionen för Filosofi, Lingvistik och Vetenskapsteori, Marianne Törner, Arbets- och Miljömedicin.

Objective (including the person-centred angle):
The overall aim of the project is to use interdisciplinary teams to develop and evaluate individually and family-adapted treatment programmes that identify and make use of the patient's view of their own situation in terms of needs, resources, will and preferences. This makes it possible to individualise and optimise rehabilitation and treatment, with the goal of increasing patient satisfaction and treatment effectiveness for children and adolescents with diabetes and obesity who belong to minority groups.

Expected results/Specific research questions:
How can the family's and the patient's view of their situation, in terms of needs, resources, will and preferences, be identified and used in treatment and care? Are there ethnic and cultural differences in this respect?
How can individually and family-adapted alternatives be developed in interdisciplinary care teams?
How can the patient's narrative form the basis for changed care practice regarding cooperation between hospital care, outpatient care and preschool/school?

Method (-s):
Observation and interview methods are used for data collection: in-depth interviews with patients and their relatives, and video recordings of conversations and meetings between patients/relatives and care staff. Change laboratories are used to bring about the desired change in care practice.

Timetable – when will the project start and be finalised, milestones:
Pilot study: April - November 2010
Main study: December 2010 - 2012
Follow-up of effects: continuing through 2013
Final report: early 2014

Project homepage

SAICD

Semantic analysis of interaction and coordination in dialogue

VR project 2009-1569, 2010-2012

Robin Cooper, University of Gothenburg
Jonathan Ginzburg, King's College London
Staffan Larsson, University of Gothenburg

The aim of the project is to integrate aspects of traditional model-theoretic semantics, developed mainly for sentence semantics and discourse, with recent developments in dialogue analysis. The project aims to give a theoretical account of how dialogue participants manage to remain coordinated during relatively intricate linguistic interaction. We aim to synthesize work on a key mechanism of interaction, namely repair (clarification questions (CQs), self-corrections, hesitations, etc.), with work in formal semantics that has modelled many of the central elements of natural language meaning, such as quantifier terms, anaphora, and attitude reports. This is a two-way street: the perspective from interaction will provide useful extra cognitively based evidence for the field of semantics, a domain overfull with theories underdetermined by evidence, while helping to solve age-old puzzles; conversely, by tackling the intricate linguistic phenomena analyzed by semanticists, theories of interaction can aspire to provide comprehensive theories of cognition. We will focus on dialogic aspects of some classical concerns of formal semantics: quantification, anaphora and intensionality.

Project homepage
