Forecast of future language technologies

LITHME has published an open access report ‘The Dawn of the Human-Machine Era. A Forecast of New and Emerging Language Technologies’ that describes a range of technologies that will soon transform the way we use and think about language.

Accessible to a wide audience, the report brings together insights from specialists in the fields of language technology and linguistic research. It was authored by 45 researchers from all eight working groups of LITHME and edited by LITHME’s Chair Dave Sayers (University of Jyväskylä, Finland), Vice-Chair Sviatlana Höhn (University of Luxembourg), and the Chair of LITHME’s Computational Linguistics working group Rui Sousa Silva (University of Porto, Portugal).

The forecast report describes the current state and probable futures of various language technologies – for written, spoken, haptic and signed modalities of language. It is the result of a unique collaboration, says Sviatlana Höhn. ‘LITHME brings together people from different directions in language work who would normally not speak to each other. We see the first results of this exchange of ideas in our forecast report: we were able to collect a variety of opinions and facts into one document produced by researchers who would otherwise never work together on one publication. It helps us to learn from each other, from other communities.’

Based on current and foreseeable developments in technology, two imminent changes to human communication are outlined in the report: speaking through technologies and speaking to technologies.

The former implies that wearable devices will actively participate in our conversations. ‘Soon we will not stare at mobile phones in our hands; that information will appear in front of our eyes from tiny eyepieces,’ describes Dave Sayers. ‘Combined with new intelligent earpieces, we will see and hear extra information about the world around us: basic stuff like travel directions, and more advanced content like auto-translations of people speaking other languages. Our own words will be amplified, clarified, translated and subtitled as we speak; and other people will see and hear that in their eye and ear tech.’

But not only will technology mediate what we see, hear and say in real time – we will also speak to chatbots on screens, and to lifelike characters in next-generation virtual reality. ‘These will be far smarter than chatbots today, ready for complex conversations – helping think through problems, discussing plans, consoling disappointments and celebrating successes,’ Sayers states, adding: ‘All this has huge implications for language.’

Alongside the exciting possibilities, challenges will come too. The report shines a light on critical issues such as inequality of access to technologies, privacy and security, and new forms of deception and crime.

‘We will face the usual issues over who can afford the latest upgrades. These devices may also work less well in some languages, or not at all in others,’ Sayers cautions.

‘We want to challenge people to think carefully about future tech. Not everyone will benefit equally from these advances, and some of us will be left far behind. Progress is usually fastest in the world’s bigger languages, like English or Chinese. Meanwhile, sign languages are much more complicated for machines to understand, so progress will be much slower still. We are hoping to encourage our readers to keep an eye out for inequalities as they see these fancy new gadgets appear – to sense inequalities amid the marketing buzz.’

Read the report here:
