Nearly twenty years ago now, I headed off on my first, and to date only, trip across the Atlantic, to San Francisco for the American Chemical Society Spring Meeting. I needed to find a job, as my position at the university was being abolished along with the rest of the chemistry department, and I couldn’t afford to limit my search to the UK. As luck would have it, I found one, and certainly not the one I was expecting either. Wandering round the exhibitors’ stands, I ran into a guy offering translations of patents from Japanese to English. Now, I happened to know some Japanese, and I’d even done a few “summaries” of Japanese patents, although I wasn’t really fast enough to make it worth my while. But what this guy was offering was slightly different: he was developing a machine translation (MT) system for Japanese-to-English technical texts, and a native speaker of English with a PhD in chemistry and a working knowledge of Japanese was a godsend for him!
I’m reminded of that spring week in San Francisco by a blog post by Marta Stelmaszak on “Paradoxes in a freelance translator’s career”, and I’ve shamelessly taken one of her paradoxes as the title (and indeed the subject) of this post. But let’s head back to 1997 for a moment. Machine translation back then was very different to what we see today. There was no Google Translate, because there was no Google! It would only be founded the following year. The “World Wide Web” was just discovering colour and images, but there was nothing like the wealth of freely available information that we have today. The statistical algorithms used by Google Translate since 2007 would have been impossible to implement back then – there were not enough electronically readable pairs of translated texts and, frankly, there wasn’t enough cheap computing power either. But there was machine translation, as I discovered when I got back to the UK and ploughed through as many books on the subject as I could find in my university’s library.
The Georgetown–IBM experiment in 1954 – the electronic translation of sixty Russian sentences into English – is widely regarded as the first successful demonstration of machine translation. At the time it was widely thought that the “problem” of machine translation would be solved, and human translators replaced, within a decade or so. This “within a decade or so” is a recurring theme in the history of MT. As it happens, within a decade or so, the U.S. government set up ALPAC, the Automatic Language Processing Advisory Committee, whose recommendations in 1966 were squarely in favour of investment to help human translators, not machines. Although starved of government funding, MT continued to be developed in the private sector with the birth of SYSTRAN, which would be adopted by the U.S. Air Force in 1973 and the European Commission in 1975. All of these were “rule-based” translation systems – a computerised dictionary and grammar, as it were – and they were always going to replace human translators “within a decade or so”, whether you asked in the early fifties or the early seventies.
Back to the present, and we’re still hearing that machine translation is the future of the profession. Not raw machine translation, because Google Translate has both made that available free of charge and shown large numbers of people how poor the output can be. But more and more language service providers are offering “machine translation + post-editing”, or MT-PE as it is known in the profession. Every so often, translators’ groups get gripped with the collective panic of “they only care about cost, they’re going to replace us with machines!”
And this is the paradox that Marta was referring to, because translators are anything but Luddites. The translation profession tends to embrace any technology that makes our lives easier and enables us to work faster. Translators are overwhelmingly self-employed, like myself, and so anything that allows us to work faster means more money or more free time for us, or even both! Personally, the software I have installed on my computer is worth at least twice as much as the computer hardware itself. As well as the Microsoft Office suite, I have specialist translation-memory software, optical character recognition and voice recognition, as well as software for file handling, accounting and other admin tasks. The modern translator needs technology: our clients expect us to be equipped to do our jobs properly.
But what about machine translation in this technology suite? Why are so few individual translators embracing MT as a way to increase their turnover, even though larger language service providers are aggressively marketing it to their clients? There are many reasons, but I think the fundamental problem, from my experience, is that MT as it stands at the moment, on most sorts of text, does not allow the translator to work faster. I can’t process any more words of French or Spanish into high-quality English by using MT than I can using the other technologies I have.
I think that is the real paradox here. Many large language service providers are investing heavily in MT, because these companies are not run by translators, they’re run by glorified accountants who believe that they somehow must be able to cut current costs through capital investment. The MT output is cheap, very cheap if you have large volumes, but it’s not fit for purpose: if it were, the clients would buy the MT systems themselves or use Google Translate for free. So it needs to be post-edited by a human translator.
And yet good human translators find it just as fast, if not faster, to translate without using MT: they are not willing to accept the reduced rates that the LSPs have to offer because they have sold MT-PE to their clients as a cost saving.
At the moment, apart from a very few niche applications, MT-PE is only saleable because the LSPs that offer it are using bad translators, at very low rates, to do the post-editing. They might as well use the same translators at the same rates to do de novo translation; their clients may well get a better product out of that deal. No translator can compete on price alone, because there will always be someone willing to offer a “translation” for less. Google Translate offers a certain sort of translation for free, and I’m sure that, if my personal ethics allowed it, I could get a desperate young translator to pay me for the privilege of translating for me in return for “experience” and “exposure”: I’ve heard enough stories of that happening in other professions that I’d be amazed if no one in translation had taken the further step into immoral exploitation. Competing on price alone is a non-starter when you’re selling your own labour: who wants to be paid less for doing the same work? And yet competing on price alone is the core idea behind the rush of LSPs towards MT-PE.
But what of the future? If there is one thing we can be sure of from the last sixty years of experience with machine translation, it’s that it will continue to develop, maybe in fits and starts, but still getting better and better. I can’t see it ever replacing human translators entirely, because I think human language, and the thoughts, feelings and emotions that it encodes, is just too complicated for human intelligence to describe in a computer algorithm. But, at some point, it will become good enough to form part of certain translators’ toolkits. At some point, it will allow some translators to work faster, and so to charge less and earn more while still delivering the same quality. I know a few translators, working in niche markets particularly suited to MT, who already manage that.
I didn’t take the MT job back in 1997, for various reasons: it seemed a huge risk, I was scared of moving so far away from my friends and family, and anyway I got another offer of a job in the south of France that I took instead! But I’m not sure what advice I’d give to a young translator today, because MT isn’t going to replace us, but it will be a part of our future. And not just “within a decade or so…” 😉