Communication technologies have come a long way over the last couple of decades. Some of the most striking innovations involve what AI is now capable of, like real-time translation.
Imagine being able to speak to anybody on Earth, in real time, despite language barriers. In a globally connected world, computational linguists will have to build technology that serves a far wider variety of people in exactly this way. Translation will play a huge role: over half of internet content is estimated to be in English, yet only around 20% of the world’s population has any English skills. And while some new products do offer real-time translation, it’s usually for somewhere between 20 and 40 languages – a tiny fraction of the more than 7,000 languages spoken around the world today.
Artificial intelligence and machine learning have been transforming all kinds of industries and processes, and will keep doing so. But language and linguistics have always been tricky territory – human language is enormously complicated, even for computers. Meaning is constantly evolving, context keeps shifting, and identifying a language’s patterns and logic takes real work even for a machine. As Stanford University Professor John McCarthy put it, “natural language does not have a full set of rules of inference.” AI runs on data, data, and more data. But how can machines be built to fully understand human language when there are no strict rules in place, and the few rules there are vary across languages and even dialects?
Progress is happening fast. Just a few years ago, AI struggled to grasp the context of a full sentence, which ruled out anything like fluency. Early systems broke sentences into chunks and interpreted words individually, severing them from their meaning entirely. But the day of completely reliable real-time translation may still be further off than you might think.
There are all kinds of bumps in the road that can trip up a computer’s ability to understand human language. Think of different dialects of the same language – what if the technology fails to understand a thick regional accent? What happens when a single language has hundreds of dialects? If only some of those dialects are used to train the software, it can still be hard to use for people who speak the same language, just not in exactly the same way.
Automatic speech recognition is one of the biggest products of computational linguistics, and we all know it’s far from perfect at this point. Sometimes Alexa or Siri simply doesn’t understand you or can’t give you the answers you want. The reason depends on the specific technology: biases in the data used to train it, the software’s interpretation of that data, or simply the fact that a computer can’t adapt on the fly the way a human can.
Have you ever tried to change your flight, or contact your pharmacy, and you just can’t get hold of an actual human no matter how hard you try? And the IVR won’t understand what you’re saying no matter how many times you say it. Or you use a speech-to-text tool and it produces a string of words that are completely wrong. These examples are pretty inconsequential, but where AI is used in more serious circumstances, the same kinds of mistakes can be consequential and difficult to notice. AI is notorious for being bad at negation – since “did” and “didn’t” are opposites, it has to identify the right one to get the meaning right. Even when the difference is hard for a human ear to pick up, we make up for it with inferences from context, tone, or other information. The human brain doesn’t need the cleanest possible data to understand something, and even so we still miscommunicate with each other all the time.
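To see why negation is so easy to miss, here’s a toy illustration (not any production system) of how a naive word-overlap comparison treats two opposite sentences as nearly identical. The sentences and the similarity measure are chosen purely for demonstration:

```python
def jaccard_similarity(a: str, b: str) -> float:
    """Word-set overlap between two sentences, from 0.0 to 1.0."""
    set_a, set_b = set(a.lower().split()), set(b.lower().split())
    return len(set_a & set_b) / len(set_a | set_b)

s1 = "the patient did take the medication"
s2 = "the patient didn't take the medication"

# Only one word differs, so the overlap score is very high --
# yet the two sentences mean exact opposites.
print(round(jaccard_similarity(s1, s2), 2))  # 0.67
```

A model that leans on this kind of surface similarity will happily treat “did” and “didn’t” as interchangeable, which is exactly the failure mode described above.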
When it comes to real-time translation, things get even weirder. With phrases like idioms, one-to-one translations are often simply inaccurate. In English, if you talk about “getting cold feet” in reference to being nervous about something, how would an AI go about translating that into another language? The technology has to pick up the context of an idiom and, ideally, translate it into an equivalent idiom – if one even exists. The examples of strange idioms are endless, and many of them make no sense based on the words alone; you just know them from speaking a language for a while. And what happens if you’re actually talking about your feet being cold, in the literal sense? How would an AI read the context well enough to know whether you mean it idiomatically or literally? Modern translation models are designed to make that call from context, but they don’t always get it right.
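One crude way to picture the idiom problem is a glossary-first lookup with a word-by-word fallback. Everything here is invented for illustration – the glossary entries, the tiny dictionary, the matching rule – and real systems learn this behavior from data rather than tables, but the sketch shows why the two paths give such different results:

```python
# Invented idiom glossary: English idiom -> equivalent Spanish idiom
# (an equivalent expression, not a literal word-for-word rendering).
IDIOM_GLOSSARY = {
    "get cold feet": "echarse atrás",  # roughly "to back out"
}

# Invented word-by-word dictionary for the literal fallback path.
LITERAL_WORDS = {
    "my": "mis", "feet": "pies", "are": "están", "cold": "fríos",
}

def translate(sentence: str) -> str:
    """Use the idiom glossary when a known idiom appears; else go word by word."""
    lowered = sentence.lower()
    for idiom, equivalent in IDIOM_GLOSSARY.items():
        if idiom in lowered:
            return equivalent
    # Naive literal fallback: fine for "my feet are cold",
    # nonsense for an idiom it doesn't know.
    return " ".join(LITERAL_WORDS.get(word, word) for word in lowered.split())

print(translate("I always get cold feet before a speech"))  # idiomatic path
print(translate("my feet are cold"))                         # literal path
```

The hard part, of course, is the `if idiom in lowered` line: deciding from context whether a phrase is being used idiomatically is precisely what trips models up.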
The progress translation technology has made in recent years is genuinely impressive for casual use, especially on the internet. Google, Microsoft, Facebook – tech companies and social media platforms have offered translation tools for a long time now. But for serious legal or medical documentation, AI still isn’t good enough to be trusted, and human translators still do it better. When AI does make mistakes, the results can be truly strange – for example, recently, the name of Chinese leader Xi Jinping turned up as a curse word when posts were translated from Burmese to English on Facebook. The name was missing from the Burmese language model’s database, and in the system’s attempt to substitute similar syllables, things went offensively wrong.
As we all know, the more data, the better. There’s an enormous amount of shared knowledge around English-to-Spanish translation, for example, so common translation tools give fairly accurate results most of the time. But what if you’re trying to translate Burmese to Finnish? That can go sideways, because far less data links those two languages directly. Sometimes systems are trained through an intermediary language like English to make the problem tractable, but this pivot step can badly distort meaning too.
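Here’s a toy demonstration of why pivoting through an intermediary language distorts meaning. The vocabularies are entirely made up (placeholder tokens, not real Burmese or Finnish): the point is that when two distinct source words collapse onto one English word, the distinction is unrecoverable on the far side of the pivot.

```python
# Invented example: two distinct source-language words both map to
# the single English word "light" (one means "not heavy", the other
# means "illumination").
SOURCE_TO_ENGLISH = {
    "source_word_a": "light",  # sense: not heavy
    "source_word_b": "light",  # sense: illumination
}

# The target language forces a choice -- and the dictionary picks
# only one sense for "light".
ENGLISH_TO_TARGET = {
    "light": "target_word_illumination",
}

def pivot_translate(word: str) -> str:
    """Source -> English -> target; ambiguity introduced at the pivot is lost."""
    return ENGLISH_TO_TARGET[SOURCE_TO_ENGLISH[word]]

# Both source words come out identical, even though source_word_a
# meant "not heavy" -- the pivot erased the difference.
print(pivot_translate("source_word_a"))
print(pivot_translate("source_word_b"))
```

Real pivot systems operate on whole sentences with statistical models rather than dictionaries, but the same information loss compounds across the two legs in just this way.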
There are fields of computer science and linguistics devoted to tackling these issues, such as Natural Language Processing. NLP is ultimately about getting computers to understand human language well enough to communicate more closely to the way humans do, by training on huge amounts of language data in all its complexity. Its tasks range from the simple, like parsing a short command, to the highly complex, like getting a computer to genuinely comprehend an entire text such as a poem. The field has been studied for half a century now, and there are still so many problems to solve. As it turns out, learning language is hard for everyone – from children, to adult learners of a second language, to scientists, to literal computers.
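The “simple command” end of that spectrum can be sketched as a keyword-based intent parser, loosely in the spirit of early voice assistants. The intents, patterns, and phrasings below are all invented for illustration – and notice how even this toy breaks the moment a user rephrases slightly:

```python
import re

# Invented intents with hand-written patterns for illustration only.
INTENT_PATTERNS = {
    "set_timer":  re.compile(r"\btimer for (\d+) minutes?\b"),
    "play_music": re.compile(r"\bplay (.+)$"),
}

def parse_command(utterance: str):
    """Return (intent, argument), or (None, None) if nothing matches."""
    text = utterance.lower().strip()
    for intent, pattern in INTENT_PATTERNS.items():
        match = pattern.search(text)
        if match:
            return intent, match.group(1)
    return None, None

print(parse_command("Set a timer for 10 minutes"))        # ('set_timer', '10')
print(parse_command("Please play some jazz"))             # ('play_music', 'some jazz')
# A slight rephrase defeats the patterns entirely:
print(parse_command("I'd like ten minutes on the clock")) # (None, None)
```

Modern NLP replaces these brittle hand-written patterns with models trained on huge corpora, which is exactly the shift the field has spent decades working toward.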
One area of computational linguistics where AI seems able to beat humans is the study of dead languages. Deciphering a dead language is difficult for many reasons – minimal records, a lack of related languages for comparison, or an entirely different structure (think of something like a lack of spaces between words). In 2020, MIT’s Computer Science and Artificial Intelligence Laboratory made major progress in developing an AI that can decipher a lost language without knowing anything about related languages in advance. The system can also determine the relationship between two languages. This technology so far exceeds human capabilities for interpreting lost languages that it should open many doors for academics studying relics of ancient communication, and hopefully speed these processes up.
Some languages have taken human linguists decades to figure out, and we still haven’t determined where certain languages even came from. But if AI can be programmed to pinpoint what a dead language is related to or what it says, maybe it’s not crazy to think real-time translation will also evolve, in the not-too-distant future, to become smarter than humans. Then again, some predict that by that point everyone from doctors to lawyers will have been replaced by AI as well. Human language, and how we communicate, really is that complicated!