November 8, 2024

An analysis of 300,000 conference talks reveals that the influence of generative artificial intelligence extends beyond writing and has already reached spoken language.

Researcher Ezequiel López was recently at an academic conference and was surprised by the speakers’ insistence on certain words, such as “delve”. Another researcher from the Max Planck Institute for Human Development (Berlin) had the same feeling: words that had barely been heard before were suddenly being repeated in presentations.

There was already some research into how unusual words had repeatedly crept into scientific articles with sentences or paragraphs written by ChatGPT or other generative AIs. Could humans now be verbally repeating words popularised by machines? They decided to find out.

The first challenge was to find enough recent presentations. They gathered some 300,000 videos of academic talks and built a model to track how frequently certain words appeared over recent years: “Our question is whether there could be an effect of cultural adoption and transmission, that machines are changing our culture and then it spreads,” says López.

The answer is yes. In 2022, they detected a turning point in previously rarely heard English words such as “delve”, “meticulous”, “realm” or “adept”. Iyad Rahwan, professor at the Max Planck Institute and co-author of the research, says: “It’s surreal. We have created a machine that can speak, that learned to do so from us, from our culture. And now we are learning from the machine. It is the first time in history that a human technology can teach us things so explicitly.”
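The researchers’ approach, as described here, amounts to tracking how often marker words appear in transcripts year by year and looking for an inflection point. A minimal sketch of that idea, using an invented toy corpus (the actual study processed some 300,000 conference videos with a far more sophisticated model):

```python
from collections import Counter
import re

# Hypothetical toy corpus: year -> list of talk transcripts.
# Illustrative only; not data from the study.
corpus = {
    2020: ["we examine the data and discuss the results",
           "our method improves accuracy on the benchmark"],
    2023: ["we delve into the data with a meticulous analysis",
           "we delve into the realm of language models"],
}

def word_frequency(texts, word):
    """Occurrences of `word` per 1,000 tokens across a list of texts."""
    tokens = [t for text in texts for t in re.findall(r"[a-z']+", text.lower())]
    if not tokens:
        return 0.0
    return 1000 * Counter(tokens)[word] / len(tokens)

# Track how often a marker word like "delve" appears each year.
trend = {year: word_frequency(texts, "delve") for year, texts in corpus.items()}
print(trend)
```

On this toy data the frequency of “delve” jumps between the two years; the study’s finding is essentially that such jumps appear, at scale, right after ChatGPT’s release.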

It is not so strange for humans to repeat new words they have just learned, and even more so if they are non-native speakers, as is the case for a significant part of the sample. “I don’t think it is a cause for alarm because in the end it is democratising the ability to communicate. If you are Japanese and you are a world leader in your scientific field, but when you speak in English at a conference you sound like an American kindergartner, that also generates biases regarding your authority,” says López.

ChatGPT allows these non-native speakers to better capture nuances and incorporate words they didn’t use before. “If you’re not a native English speaker and you go to the cinema tomorrow and there’s a new word that surprises you, you’re likely to adopt it too, like ‘wiggle room’ in Oppenheimer; or ‘lockdown’ during the pandemic,” says López. But there is one caveat, this researcher points out: strikingly, the words adopted at these academic conferences are not nouns that help describe something more precisely, but instrumental words such as verbs and adjectives.

There are two curious consequences of this adoption. First, since it has become clear in the academic world that these words are ChatGPT creations, they have become cursed: using them can be frowned upon. “I am already seeing this in my own lab. Every time someone uses ‘delve,’ everyone instantly catches on and makes fun of them. It has become a taboo word for us,” says Rahwan.

The second consequence may be worse. What if, instead of making us adopt words at random, these machines were able to put more loaded words into our heads? “On the one hand, what we found is fairly harmless. But this shows the enormous power of AI and the few companies that control it. ChatGPT is capable of having simultaneous conversations with a billion people. This gives it considerable power to influence how we see and describe the world,” says Rahwan. A machine like this could determine how people talk about wars like those in Ukraine or the Middle East, how they describe people of a particular race, or how they apply a biased view to historical events.

At the moment, given its global adoption, English is the language where these changes are easiest to detect. But will the same happen in Spanish? “I have wondered about that. I suppose something similar will happen, but the bulk of science and technology is in English,” says López.

It also affects collective intelligence

Generative AI may have unexpected consequences in many areas other than language. In another study published in Nature Human Behaviour, López and his co-authors have found that collective intelligence, as we understand it, is in danger if we start using AI on a massive scale. Collaborative code sites such as GitHub or Stack Overflow will lose their role if each programmer uses a bot to generate code. There will no longer be a need to consult what other colleagues have done before, or to improve it or comment on it.

For Jason Burton, a professor at Copenhagen Business School and co-author of the paper, “Language models don’t mean the end of GitHub or Stack Overflow. But they are already changing how people contribute to and engage with these platforms. If people turn to ChatGPT instead of searching for things on public forums, we’re likely to continue to see a decline in activity on those platforms, because potential contributors will no longer have their audience.”

Programming is just one possible victim of AI. Wikipedia and its writers may become mere reviewers if everything is written by a bot. Even education may need rethinking, according to López: “Let’s imagine that, in the current educational system, teachers and students are increasingly relying on these technologies; some to design questions and others to find the answers. At some point we will have to rethink what function these systems should have and what our new, efficient role in coexisting with them would be. Above all, so that education does not end up consisting of students and teachers pretending on both sides and performing a play for eight hours a day.”

These language models do not spell only trouble for collective intelligence. They are also capable of summarizing, aggregating, or mediating complex processes of collaborative deliberation. But, as Burton points out, caution is essential in these processes to avoid sliding into groupthink: “Even if each individual capacity is enhanced by using an app like ChatGPT, this could still lead to bad results at the collective level. If everyone starts relying on the same app, it could homogenize their perspectives and lead to many people making the same mistakes and overlooking the same things, rather than each person making different mistakes and correcting each other.” That is why, with their study, these researchers call for reflection and possible policy interventions to allow for a more diverse field of language model developers and thus avoid a landscape dominated by a single model.
