In the 1990s, language technology underwent a revolution. Until then, it was a field concerned largely with toy applications employing exact linguistic techniques, grounded in linguistic theory and based on mathematical logic and discrete mathematics. During the 1990s it became a field concerned with large datasets and broad-coverage applications using approximative statistical techniques with little relation to linguistic theory.

While this shift opened the way for large-scale applications with commercial viability, the beginning of the 21st century has brought a growing perception that, on the one hand, brute “theory-lite” statistics will not meet the challenges of more advanced language technology applications (for example, systems that learn to adapt their use of language to different users and situations). On the other hand, exact linguistic theories that take no account of the statistical aspects of language use, learning and change cannot account for the kinds of phenomena that are of increasing theoretical interest, such as how speakers adapt to each other’s language use in dialogue or how written language is adapted to different contexts. For example, since the turn of the century there has been growing awareness that the meaning of language (semantics), traditionally the domain of exact logic-based approaches, is of great importance for the large-scale spoken- and written-language applications that are normally the preserve of statistical, data-driven approaches.

The research community at CLT includes experts of international distinction representing both approaches, with long experience of collaborative research with one another.