Intellego/Meetings/Status/2014-01-23
From MozillaWiki
https://intellego.etherpad.mozilla.org/ep/pad/view/ro.8xafZ2WPK5AlVPYWfo0Do/rev.433
Meeting Details
- Thursday, January 23, 2014; 9 AM PST / 12 PM EST / 17:00 GMT
- Vidyo Room: https://v.mozilla.com/flex.html?roomdirect.html&key=BTvYSZyJA2lW
- Telephone Conference Bridge: +1 800-707-2533 (USA/CAN toll-free); password 369; conference number 99625
- IRC Backchannel: #intellego
- Attendees: Jeff, Kensie
Talking Points
- Action item follow-up
- Beta testing in Q1 for the short-term solution.
- Target markets are Poland, Turkey, and Vietnam.
Previous Action Items
- Determine Phase 1 milestones
- Rework research questions within the travel metaphor paradigm
- [Jeff] MT output evaluation research for Polish, Turkish, and Vietnamese
- We need to find out why these languages were chosen, to help frame our longer-term goals.
- [Kensie] Spiel for Intellego
- A generic spiel that we can target toward specific groups to gain interest without having to give out project specifics
- https://wiki.mozilla.org/Intellego
- creating it here https://intellego.etherpad.mozilla.org/backgrounder
Action Items
- [Kensie] Flesh out the spiel and put it on the wiki: https://intellego.etherpad.mozilla.org/backgrounder
- Determine Phase 1 milestones
- [All] Rework research questions within the travel metaphor paradigm
- [Jeff] Email Bill with studies and ask for next steps.
Research
Evaluating translation output
- http://en.wikipedia.org/wiki/Evaluation_of_machine_translation
- http://en.wikipedia.org/wiki/Round-trip_translation
- http://en.wikipedia.org/wiki/Comparison_of_machine_translation_applications
- http://en.wikipedia.org/wiki/BLEU
- http://en.wikipedia.org/wiki/NIST_(metric)
- http://en.wikipedia.org/wiki/Word_error_rate
- http://en.wikipedia.org/wiki/METEOR
- http://en.wikipedia.org/wiki/LEPOR
- http://www.est-translationstudies.org/intranet/research/MT.pdf
- Describes the results of studies testing the quality of Turkish MT output.
- http://delivery.acm.org/10.1145/1880000/1873930/p1326-zhao.pdf?ip=204.228.136.8&id=1873930&acc=OPEN&key=BF13D071DEA4D3F3B0AA4BA89B4BCA5B&CFID=402558063&CFTOKEN=29922176&__acm__=1390496320_c41a9c6fa52e420dbfcdeadceab68b1d
- Study by Baidu researchers on using multiple MT engines to improve translation accuracy.
- http://www.raco.cat/index.php/Tradumatica/article/view/225899/307310
- Basic overview of MT and an evaluation of four MT engines (Microsoft, Systran, and Google among them). Rather than reporting seemingly arbitrary numbers, it displays the raw output from each engine and notes accuracy errors.
- http://amta2012.cloudapp.net/AMTA2012Files/papers/Richardson.pdf
- A detailed description of how a prominent global organization (the LDS Church) implemented Microsoft MT organization-wide and rolled it out across approximately 10 languages.
- http://www.itl.nist.gov/iad/mig//tests/mt/2006/doc/mt06eval_official_results.html
- US government study evaluating the raw output of 24 MT engines (Systran, Google, and Microsoft included) using the BLEU methodology.