How do language technologies affect how we think about language? Do digital technologies destabilize our traditional orthographic and grammatical standards? Or is the opposite the case – will language technologies in the human-machine era bring about new language hierarchies and an increase in fixed language norms? And what can we learn from human-machine interaction with regard to human culture and epistemologies of science?
These are some of the questions we are concerned with in LITHME’s Working Group 6 (Ideologies, Beliefs, Attitudes). The Working Group consists of researchers from the fields of sociolinguistics, computational linguistics, linguistic anthropology, cognitive science, and the philosophy of language. During the first grant period, the Working Group hosted several external talks by academics and stakeholders from technology companies, resulting in some very lively and thought-provoking discussions.
Overall, we understand our Working Group as a project of ‘upskilling’ – we enjoy learning from each other and from our discussions with stakeholders. This interactive learning and mutual upskilling have already generated many interesting findings, which are helping us shape the focus of our research group. It has become clear to us, for example, how pervasive social and gendered biases in language data sets are and how they are reproduced through artificial intelligence (AI) algorithms. This finding also gives us a clearer understanding of the nature of standard language biases and of how these may hinder multilingual interaction and translingual practices.
Another finding has been that commercial interests play an important role in the language technology domain, yet this aspect is rarely considered in discussions about the future of language development. Sociolinguistic hierarchies in a multilingual, globalized world are supported and reconfigured through language technologies – technologies that are commercially driven and were invented not for linguistic justice or political projects, but to make money.
The Working Group is also interested in how human-machine interactions give rise to new configurations of interaction, and in what this tells us about humans and their relationship to material environments. This includes the question of how humans – including tech professionals – attribute agency to technical tools and anthropomorphize, for example, word embedding technologies and algorithms.
In future discussions, the Working Group will consider the social ideologies of programmers and tech companies, which typically entail Western discourses on ideals of social equality, democracy, and the avoidance of ‘harm’, often equating statistical balance with ‘equality’.
Besides talks and discussions, members of the Working Group have drafted literature reviews in smaller sub-working groups on ‘ideologies in corpus design and corpus exploitation’, ‘language ideologies and their relation to media technologies’, and ‘expert and lay beliefs about machines’. The insights, literature reviews, and questions we have developed so far will form the basis for a joint chapter on language ideologies in the human-machine era.
Working Group 6 consists of three sub-groups:
1) Group on Ideologies in Corpus Design and Corpus Exploitation
Naomi Truan, Glenda Leung, Axel Bohmann, Philipp Meer, Bettina Migge
2) Group on Language Ideologies and their Relation to Media Technologies
Iker Erdocia, Susanne Mohr, Britta Schneider, Maggie Glass
3) Group on Expert and Lay Beliefs about Machines
Miriam Schmidt-Jüngst, Theres Festers, Stephen Cowley, Auli Viidalepp, Barbara Lewandowska-Tomaszczyk, Britta Schneider, Rasmus Gahrn-Andersen
Blog post authors