By Michael LoPresti, from interview with Kevin Nelson
The history of machine translation reaches far back into the past, long before preteens in the mid-1990s were using Babelfish in the middle school library to translate bad words into other languages. Some trace the history of machine translation all the way back to the French philosopher René Descartes, who, in the 17th century, proposed a universal language in which common symbols would represent equivalent ideas across different languages.
It was in the 1950s, though, that computer scientists began teaching grammatical rules to computers in an effort to create artificial translation machines. In 1954, a team from IBM and Georgetown University demonstrated its machine-translation system: an assistant typed pithy Russian phrases onto IBM punched cards, and the so-called brain returned accurate English renderings such as, “We transmit thoughts by means of speech.” As it turns out, those early machine-translation pioneers who were attempting to teach grammar rules to computers had the process a little bit backward. But we’ll get back to that in a moment.