Translation Memory Engines: A Look under the Hood and Road Test

Timothy Baldwin
CSLI, Stanford University
Stanford, CA 94305 USA
tbaldwin@csli.stanford.edu

Abstract

In this paper, we compare the relative effects of segment order, segmentation and segment contiguity on the retrieval performance of a translation memory system. We take a selection of both bag-of-words and segment order-sensitive string comparison methods, and run each over both character- and word-segmented data, in combination with a range of local segment contiguity models (in the form of N-grams). Over two distinct datasets, we find that indexing according to simple character bigrams produces a retrieval accuracy superior to any of the tested word N-gram models. Further, in their optimal configuration, bag-of-words methods are shown to be equivalent to segment order-sensitive methods in terms of retrieval accuracy, but much faster. We also provide evidence that our findings scale over larger-sized translation memories.

1 Introduction

Translation memories (TMs) are a list of translation records (source language strings paired with a unique target language translation), which the TM system accesses in suggesting a list of target language (L2) translation candidates for a given source language (L1) input (Trujillo 1999; Planas 1998). For example, if a translator is attempting to translate a given Japanese document into English (i.e. L1 = Japanese, L2 = English), the TM system will take each string in the original Japanese document and attempt to locate similar Japanese string(s) in a database of previous Japanese–English translation data (the TM). In the case that some set of suitably similar strings is located in the TM, the translations for each such string are returned to the translator. Translation retrieval (TR) is the process of selecting from the TM a set of translation records (TRecs) of maximum L1 similarity to a given input.
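As a rough sketch of this setup (our own illustrative code, not the paper's implementation; the record type, function names and the use of a single similarity cutoff in place of a similarity corridor are simplifying assumptions), a TM lookup might look as follows:

```python
from dataclasses import dataclass

@dataclass
class TranslationRecord:
    source: str  # L1 string (e.g. Japanese)
    target: str  # its unique L2 translation (e.g. English)

def retrieve(tm, query, similarity, threshold=0.5):
    """Return the L2 translation candidates whose L1 side is suitably
    similar to the query, most similar first."""
    scored = [(similarity(rec.source, query), rec) for rec in tm]
    hits = [(s, rec) for s, rec in scored if s >= threshold]
    return [rec.target for s, rec in sorted(hits, key=lambda x: -x[0])]
```

Any string comparison method of the kind compared in this paper can be plugged in as the `similarity` argument.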
Traditionally, the TM system selects an arbitrary number of translation candidates falling within a preset corridor of similarity with the input string, and simply outputs these for manual manipulation by the user in fashioning the final translation.

The process of TR and the intrinsic utility of TMs are based on three basic assumptions: (1) L1 documents are to some degree repetitive in their string composition, or overlap to some degree with the translation data contained in the TM, such that the TM system is able to propose translation candidates for some subset of the strings contained in the original document; (2) a given string will be translated consistently irrespective of document context, such that if a good match is found for a given L1 string, the associated translation candidate will be a near-miss translation for the input string; and (3) L1 similarity is directly proportional to L2 translation similarity, such that TRecs which are more similar to the input will have translations which correspond more closely to the translation of the input. Given that these assumptions hold (as is generally the case in technical domains, for example), TMs provide a means to recycle translation data and save time in the translation process.

(This paper is based on Baldwin (2001), and appeared in the Proceedings of the 15th International Japanese/English Translation Conference (IJET-15), Yokohama, Japan.)

One key concern in TM systems is invisibility, in terms of their integration into the translation environment (e.g. into the translator's word processing software of choice) and system responsiveness. Essentially, a TM system should take nothing away from the translator in terms of translation utility, speed or accuracy, and should operate in such a way that the translator can easily ignore the TM system output if they feel that the translation candidate(s) are not suitable base material in translating a given input.
For the purposes of this paper, we will ignore the integration issues, and focus instead on the TM back-end in terms of responsiveness (i.e. speed) and accuracy.

A key assumption surrounding the bulk of past TR research has been that the greater the match stringency/linguistic awareness of the retrieval mechanism, the greater the final retrieval accuracy will become. Naturally, any appreciation in retrieval complexity comes at a price in terms of computational overhead, potentially impacting upon system responsiveness. We thus follow the lead of Baldwin & Tanaka (2000) and Baldwin (2001) in asking the question: what is the empirical effect on retrieval performance of different match approaches? Here, retrieval performance is defined as the combination of retrieval speed and accuracy, with the ideal method offering fast response times at high accuracy.

In this paper, we choose to focus on retrieval performance within a Japanese–English TR context. One key area of interest with Japanese is the effect that segmentation has on retrieval performance. As Japanese is a non-segmenting language (it does not explicitly delimit words orthographically), we can take the brute-force approach of treating each string as a sequence of characters (character-based indexing), or alternatively call upon segmentation technology to partition each string into words (word-based indexing). The string にわか雨 [niwakaame] "rain shower", e.g., would be segmented up into に|わ|か|雨 [1] under character-based indexing, but treated as a single segment under word-based indexing.

Orthogonal to this is the question of sensitivity to segment order. That is, should our match mechanism treat each string as an unorganised multiset of terms (the bag-of-words approach), or attempt to find the match that best preserves the original segment order in the input (the segment order-sensitive approach)?
In other words, should we treat 夏の雨 [natu no ame] "summer rain" and 雨の夏 [ame no natu] "a rainy summer" identically on account of them being made up of the same segments, or is the difference in segment order relevant in the context of TR? We tackle this issue by implementing a sample of representative bag-of-words and segment order-sensitive methods, and testing the retrieval performance of each.

As a third orthogonal parameter, we consider the effects of segment contiguity. That is, do matches over contiguous segments provide closer overall translation correspondence than matches over non-contiguous segments? That is, should we consider 雨の多い冬 [ame no ōi huyu] "a rainy winter" more similar to 雨の冬 [ame no huyu] "a rainy winter" than to 雨マークの多い冬 [ame māku no ōi huyu] "a winter with many days marked as rainy", on account of the two strings having the same basic segment overlap in the same order, but the segments being more contiguous in the first instance? Segment contiguity is either explicitly modelled within the string match mechanism, or provided as an add-in in the form of segment N-grams (see below).

The major finding of this paper is that character-based indexing is shown to be superior to word-based indexing over a series of experiments and datasets. Furthermore, the bag-of-words methods we test are equivalent in retrieval accuracy to the more expensive segment order-sensitive methods, but superior in retrieval speed. Finally, segment contiguity models provide benefits in terms of both retrieval accuracy and retrieval speed, particularly when coupled with character-based indexing. We thus provide clear evidence that high-performance TR is achievable with naive methods, and moreover that such methods outperform more intricate, expensive methods.

[1] Segment boundaries are indicated by "|" throughout the paper.
That is, counter to intuition, TR methods which seemingly ignore the linguistic structure of strings are superior to methods which attempt to (at least superficially) model this linguistic structure.

Below, we review the orthogonal parameters of segmentation, segment order and segment contiguity (§2). We then present a range of both bag-of-words and segment order-sensitive string comparison methods (§3) and detail the evaluation methodology (§4). Finally, we evaluate the different methods in a Japanese–English TR context (§5), before concluding the paper (§6).

2 Basic Parameters

In this section, we review three parameter types that we suggest impinge on TR performance, namely segmentation, segment order, and segment contiguity.

2.1 Segmentation

Despite non-segmenting languages such as Japanese not making use of explicit segment delimiters such as whitespace, it is possible to artificially partition off a given string into constituent morphemes through a process known as segmentation.

Looking at past research on string comparison methods for TM systems, almost all systems involving Japanese as the source language rely on segmentation (Nakamura 1989; Sumita & Tsutsumi 1991; Kitamura & Yamamoto 1996; Tanaka 1997), with Sato (1992) and Sato & Kawase (1994) providing rare instances of character-based systems. This is despite Fujii & Croft (1993) providing evidence from Japanese information retrieval that character-based indexing performs comparably to word-based indexing. In analogous research, Baldwin & Tanaka (2000) compared character- and word-based indexing within a Japanese–English TR context and found character-based indexing to hold a slight empirical advantage.

The most obvious advantage of character-based indexing over word-based indexing is that there is no pre-processing overhead.
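To make the contrast concrete, the two indexing schemes can be sketched as follows. This is our own minimal illustration: the word segmenter is stubbed out, where a real system would call an external morphological analyser for Japanese.

```python
def char_index(string):
    """Character-based indexing: every character is a segment,
    so no segmenter (and no pre-processing overhead) is needed."""
    return list(string)

def word_index(string, segment):
    """Word-based indexing: delegate to an external segmenter that
    partitions the string into words."""
    return segment(string)
```

Under character-based indexing, にわか雨 [niwakaame] yields the four segments に|わ|か|雨, while a word-based segmenter would return it as a single segment.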
Other arguments for character-based indexing over word-based indexing are that we: (a) avoid the need to commit ourselves to a particular analysis type in the case of ambiguity or unknown words (e.g. does 東京都 correspond to 東京|都 [Tōkyō to] "Tokyo prefecture" or 東|京都 [higashi Kyōto] "east Kyoto"?); (b) avoid the need for stemming/lemmatisation (e.g. recognising that お食べになる [otabeninaru] "eat (subject honorific)" and 食べます [tabemasu] "eat" both correspond to the verb lemma 食べる); and (c) to a large extent get around problems related to the normalisation of lexical alternation (e.g. differences in vowel length, such as between コンピュータ [konpyūta] "computer" and コンピューター [konpyūtā] "computer").

Note that all methods described below are applicable to both word- and character-based indexing. To avoid confusion between the two lexeme types, we will collectively refer to the elements of indexing as segments.

2.2 Segment Order

Our expectation is that TRecs that preserve the segment order observed in the input string will provide closer-matching translations than TRecs containing those same segments in a different order.

As far as we are aware, there is no TM system operating from Japanese that does not rely on word/segment/character order to some degree. Tanaka (1997) uses pivotal content words identified by the user to search through the TM and locate TRecs which contain those same content words in the same order and preferably the same segment distance apart. Nakamura (1989) similarly gives preference to TRecs in which the content words contained in the original input occur in the same linear order, although there is scope to back off to TRecs which do not preserve the original word order.
Sumita & Tsutsumi (1991) take the opposite tack in iteratively filtering out NPs and adverbs to leave only functional words and matrix-level predicates, and find TRecs which contain those same key words in the same ordering, preferably with the same segment types between them in the same numbers. Sato & Kawase (1994) employ a more local model of character order in modelling similarity according to N-grams fashioned from the original string.

2.3 Segment contiguity

The intuition that contiguous segment matches indicate higher string similarity than non-contiguous segment matches is captured either by embedding some contiguity weighting facility within the string match mechanism (such as weighted sequential correspondence; see below), or by providing an independent model of segment contiguity in the form of segment N-grams. By N-gram, we simply mean a string of N contiguous segments. E.g., 夏の雨 [natu no ame] "summer rain" contains a total of two 2-grams (夏の and の雨), which are formed by taking all contiguous pairings of segments in the string.

The particular N-gram orders we test are simple unigrams (1-grams), pure bigrams (2-grams), and mixed unigrams/bigrams. These N-gram models are implemented as a pre-processing stage, following segmentation (where applicable). All this involves is mutating the original strings into N-grams of the desired order, while preserving the original segment order and segmentation schema. From the Japanese string 夏|の|雨 [natu|no|ame] "summer rain", for example, we would generate the following variants (common to both character- and word-based indexing):

1-gram: 夏|の|雨
2-gram: 夏の|の雨
Mixed 1/2-gram: 夏|の|雨|夏の|の雨

3 String Comparison Methods

As the starting point for evaluation of the three parameter types targeted in this research, we take two bag-of-words (segment order-oblivious) and three segment order-sensitive methods, thereby modelling the effects of segment order (un)awareness.
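The N-gram mutation described in §2.3 can be sketched as follows; the function names and list representation are our own illustrative choices, as the paper does not specify an implementation:

```python
def ngrams(segments, n):
    """All contiguous runs of n segments, in original order."""
    return ["".join(segments[i:i + n]) for i in range(len(segments) - n + 1)]

def mutate(segments, model):
    """Re-index a segmented string under a given N-gram model."""
    if model == "1-gram":
        return ngrams(segments, 1)
    if model == "2-gram":
        return ngrams(segments, 2)
    if model == "mixed":
        return ngrams(segments, 1) + ngrams(segments, 2)
    raise ValueError(f"unknown N-gram model: {model}")
```

Applied to the character-segmented string 夏|の|雨, this reproduces the 1-gram, 2-gram and mixed 1/2-gram variants described in §2.3.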
We then run each method over both segmented and unsegmented data in combination with the various N-gram models proposed above, to capture the full range of parameter settings.

The particular bag-of-words approaches we target are the vector space model (Manning & Schütze 1999: p. 300) and "token intersection". For segment order-sensitive approaches, we test 3-operation edit distance and similarity, and also "weighted sequential correspondence".

We will illustrate each string comparison method by way of the following mock-up TM:

(1a) 夏の雨 [natu no ame] "summer rain"
(1b) 雨の夏 [ame no natu] "a rainy summer"
(1c) 雨の冬 [ame no huyu] "a rainy winter"
(1d) 真冬の雨 [ma huyu no ame] "mid-winter rain"
(1e) にわか雨 [niwakaame] "rain shower"
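To illustrate the bag-of-words versus order-sensitive contrast over this mock-up TM, here is a sketch of one bag-of-words method (token intersection) and one order-sensitive method (3-operation edit distance). The exact normalisations used in the paper are not given in this excerpt, so the scoring here is only indicative:

```python
from collections import Counter

def token_intersection(a, b):
    """Bag-of-words similarity: segments shared between the two strings
    (with multiplicity), normalised by their combined length."""
    shared = sum((Counter(a) & Counter(b)).values())
    return 2 * shared / (len(a) + len(b))

def edit_distance(a, b):
    """3-operation (insert/delete/substitute) edit distance over segments."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # delete x
                           cur[j - 1] + 1,           # insert y
                           prev[j - 1] + (x != y)))  # substitute x with y
        prev = cur
    return prev[-1]
```

Against the query (1a) 夏の雨, token intersection cannot distinguish (1a) from (1b) 雨の夏, since both score 1.0 over character segments, whereas edit distance separates them (0 vs. 2): exactly the segment order question posed in §2.2.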