
The Cambridge Dictionary team chooses “hallucinate” as its Word of the Year 2023

A word that may determine the future of AI

Having worked as language service professionals for over two decades, we at ALPHABET were certainly not surprised that the Cambridge Dictionary chose “hallucinate” as its Word of the Year 2023. The term refers to a particular behaviour of Artificial Intelligence (AI) systems and the so-called Large Language Models (LLMs), whereby the information they generate often ranges from inaccurate to downright false.

So, what does such a system do? Its main purpose is to answer questions on any subject, provide instructions and information of any nature, generate text, and even perform specific tasks: creating a summary, correcting an existing text according to stylistic criteria specified by the user, drafting legal content, or even generating programming code for various operations, up to and including building a website from scratch. As expected, an LLM can also translate; but more on that below.

The system uses an algorithm that has been fed (“trained”) with a huge amount of linguistic and other content, and that tries, using various mathematical models, to guess which word is most likely to follow the words that precede it in a sentence, taking the wider context into account.
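
To illustrate the principle in the simplest possible terms, here is a deliberately simplified Python sketch. It uses a toy bigram model (counting which word tends to follow which) rather than the neural networks behind real LLMs, and the tiny corpus and function names are purely illustrative assumptions of ours:

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the "huge amount of linguistic content"
# a real LLM is trained on (hypothetical example data).
corpus = (
    "the translator reviewed the text and the translator corrected the text "
    "the editor reviewed the translation and the editor approved the translation"
).split()

# Count how often each word follows each preceding word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word; no notion of meaning."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# Generate a "plausible" sequence word by word: each step only guesses
# what usually comes next, without understanding anything it writes.
word, sentence = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    if word is None:
        break
    sentence.append(word)
print(" ".join(sentence))
```

On this toy corpus, the greedy prediction quickly falls into a fluent-sounding but meaningless loop (“the translator reviewed the translator reviewed…”): a crude glimpse of how a system can string together statistically plausible words without understanding any of them.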

Since the system does not understand the meaning of a sentence or a whole text but simply tries to guess the right sequence of words, it often “hallucinates”. 

In other words, it presents false information as real, and in a manner that makes it quite easy to believe that the information is valid and correct. The generated information may, of course, also be a translation requested by the user.

Meanwhile, in an effort not to miss out on technological developments that are taking place at lightning speed, many companies that have been offering Computer-Assisted Translation (CAT) tools for decades have recently rushed to incorporate AI and LLMs into their products.

For 27 years, ALPHABET has been closely monitoring all technological developments in the translation industry. We study current options, test various translation software products extensively and systematically, and always strive to select solutions that best serve our primary goal: Quality of Translation. 

In this respect, “hallucination” is just one of the key issues with large language models. Others include the insufficient availability of content in less widely used languages (such as Greek) and the use of poor-quality content.

Given the above, we urge our clients to be particularly cautious when they receive offers for translation services at unusually low prices, as providers who quote such prices often rely on large language models (such as the well-known ChatGPT); this applies especially to sensitive or demanding content (advertising, promotional, scientific, etc.). We recommend the same caution and vigilance to clients who attempt to translate such content in-house using machine translation and/or AI systems. The final cost might turn out unexpectedly high!