Automation and sharing in the new translation ecosystem

Technology is greatly improving access to media content, both in its original version and in translation. Human translators have limited capacity, uneven quality output, and high costs. Technology can aid the translation and production of services such as subtitling, captioning, audio description, audio subtitles, and sign language through avatars. This automation process and the new convergent distribution ecosystem have direct implications for existing working practices, freeing humans for a more creative role. Automation urgently requires labeling and quality-benchmarking systems. Automatic tools to monitor quality and to endorse labels are not yet fully developed, and legislation remains unresolved at both the European and national levels. Agencies to monitor quality independently would also have to be established, and in this new scenario the translator's profile will shift slightly from first-hand producer of assets to editor and quality monitor. This quality control will also apply to work produced by communities, such as crowdsourcing. Once quality can be controlled and guaranteed, this new working practice will play a leading role in commercial contexts.

Issues such as Creative Commons licensing, copyright, watermarks, and distribution are at the center of the new translation ecosystem, where technology, automation, and sharing will have a direct impact on costs and quality.
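To make the idea of automatic quality benchmarking concrete, here is a minimal sketch of a quality gate for subtitles. The two thresholds used (maximum characters per line and maximum reading speed in characters per second) are common industry guidelines, but the exact values vary by broadcaster and client, so treat them as illustrative assumptions rather than fixed standards.

```python
# Minimal sketch of an automatic subtitle quality gate.
# Threshold values are assumptions based on common guidelines;
# real broadcasters set their own limits.
MAX_CHARS_PER_LINE = 42   # frequently cited broadcast guideline
MAX_CPS = 17.0            # common reading-speed ceiling (chars/second)

def check_subtitle(lines, duration_seconds):
    """Return a list of rule violations for one subtitle event."""
    issues = []
    for i, line in enumerate(lines, start=1):
        if len(line) > MAX_CHARS_PER_LINE:
            issues.append(
                f"line {i}: {len(line)} chars exceeds {MAX_CHARS_PER_LINE}"
            )
    total_chars = sum(len(line) for line in lines)
    cps = total_chars / duration_seconds
    if cps > MAX_CPS:
        issues.append(f"reading speed {cps:.1f} cps exceeds {MAX_CPS}")
    return issues

# Example: a two-line subtitle displayed for 2 seconds passes both checks.
print(check_subtitle(["This line is fine.", "So is this one."], 2.0))
```

A tool like this could not judge translation accuracy, of course; that is exactly the part that would remain with the human editor in the monitoring role described above.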

Comments

It's really interesting that you brought up Creative Commons, copyright, watermarks, and distribution in this post, because of what happened recently with the Fine Brothers. Long story short, they are a YouTube channel that recently faced a lot of backlash for trying to trademark a term and format that had been around for a long time. Seeing how quickly technology changes, and how translation differs across countries and languages (which was the stated reason for their trademark bid, since they wanted to create something the entire world could take part in), there has been a lot of confusion surrounding the case, especially about what they were actually trying to accomplish. It shows that language, as much as technology, is something that should continuously be considered.

This does still bring up my question, though: with all of the advances in technology we have, how is it that some closed captioning reads so much more strangely than the actual dialogue?

Technology works really well, but it isn't cheap. If you want a good-quality product, you need to tailor it to your language model, your domain, and your specifications. YouTube's captions are a free demo, as is Google Translate; essentially, they are models being trained on input from users. As you mention, the number of languages and their diversity make a "universal" free machine something of a utopia. A decent budget, a fixed set of languages, and a restricted semantic field will yield great results.

As for why closed captions are so different from ... what, exactly? Are the captions produced in real time? There are several modalities of closed captioning, and they directly affect the result. If done by stenotype, what you get is not closed captioning but a court report, almost verbatim, with no annotations for sound. If they are produced by a respeaker, you get a delay (décalage) between text and image, plus heavy editing of the original dialogue. Finally, if they are done offline in the old-fashioned way, with someone typing and editing, you get really nice closed captions, sometimes with colors, emoticons, and so on.

"Automatic tools to monitor quality and to endorse labels are not yet fully developed, with unsolved legislation at European level and also at national levels. Agencies to independently monitor quality would also have to be established, and in this new scenario the translator will slightly change his working profile from first hand producer of assets, to editing and monitoring quality."

This is such a salient point, and I have been thinking a lot about it. I know that several automated translators exist, but their output looks a bit like YouTube's closed captions: cultural translation and context remain outside these current automated systems. However, with the volunteer force that many translation and scanlation communities provide, humans could easily quality-check the computers. Not to always refer back to YouTube, but its ability to identify copyrighted (or, in this instance, region-coded) material seems to favor copyright holders far too conservatively. Will these algorithms be able to handle the cultural understandings that go into the many facets of translation? I would love to hear more about the models that are already being suggested.

I also wonder whether this software will align with what volunteers want. Many translation volunteers are either interacting with source material they love or using these crowdsourcing communities to develop language skills. Does automation software remove what makes these communities attractive to their members?

Thanks for your response! What a great opening to the conversation. 
