How to implement multi-transliteration efficiently

It is nothing new that we process Pali data in our project SeNeReKo. As Pali has no writing system of its own, these texts were originally written in Devanagari, Sinhala and Burmese script. Nowadays we use a Latin-based script with diacritics, the so-called Rhys-Davids transliteration. (See Wikipedia for details.) But we use other transliterations as well: especially the Harvard-Kyoto convention is common, as it avoids diacritics and therefore simplifies the writing of Pali words to some extent.

The problem: Multiple Transliterations

Regarding these writing systems, an interesting problem arose in our project lately. As we build technical systems to process Pali words, we wanted to support more than one transliteration of Pali. As it is unreasonable to store all data in various transliterations in parallel in our database, we developed a converter system that is capable of transliterating data on the fly as needed. Nevertheless we wanted to support a variety of transliterations and to keep the transliteration system open for any kind of Pali transliteration requirement in the future. A naive approach would have been to build an individual converter for every ordered pair of transliterations and to maintain them in a list, so that more converters could be added later. For n schemes this means n · (n − 1) direct converters. Supporting Devanagari, Harvard-Kyoto and Rhys-Davids would require 3 · 2 = 6 converters. While this would not be such a big deal, supporting the Velthuis transliteration scheme in addition to the schemes already mentioned would require us to implement 4 · 3 = 12 converters. And if we think ahead: if – at some point in the future – the need arose for yet another transliteration scheme, already 5 · 4 = 20 converters would be required. So building individual converters for every pair of transliterations is not a good solution. Fortunately we can follow a better approach.

Think of it this way: if you want to convert from – let's say – the Rhys-Davids scheme to Devanagari, it is not necessary to have a single converter that can accomplish this task. Instead you can split the process into multiple intermediate conversion steps. In that case, for each new transliteration scheme added now or in the future, you just need converters to and from a scheme that is already supported. Once a word is transliterated to such an already-implemented scheme, it can then be transliterated to all supported target schemes. This might reduce computational efficiency to some extent, but – and that is typically more important – it avoids the excessive manual work required to implement a converter for every single conversion possible.
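To make this concrete, here is a minimal sketch in Java; the "Converter" interface and the converter parameters are hypothetical placeholders, not the actual types from our repositories:

    // Hypothetical minimal converter interface; the real types in our
    // repositories may look different.
    interface Converter {
        String convert(String word);
    }

    class ChainingExample {
        // Convert Rhys-Davids -> Devanagari without a direct converter:
        // first to the intermediate Harvard-Kyoto form, then on to Devanagari.
        static String rhysDavidsToDevanagari(String word,
                                             Converter rdToHkc,
                                             Converter hkcToDev) {
            String intermediate = rdToHkc.convert(word);
            return hkcToDev.convert(intermediate);
        }
    }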

Having this in mind, we can reduce the implementation effort for converting from one scheme to another. But of course an algorithm must find a way to transliterate from scheme A to scheme B using intermediate conversion steps. In computer science this is nothing new: it is a graph search problem and can be solved using well-known graph search algorithms.

The solution: Path finding

Consider this: we want to convert from A to C, but we cannot convert directly. Instead we can convert from A to B and from B to C. We can draw that as a graph connecting A with B and B with C. A transliteration scheme is therefore represented by a node in this graph, and an edge represents a possible conversion. These edges must be directed, of course, because a single converter implementation only works in one direction, e.g. from A to B. If we support other conversions – and we do – we will have more edges in the graph, e.g. connecting B with D, or C with F, or whatever. So each edge corresponds to exactly one converter. If we want to perform a conversion, we just need to find a path from the source transliteration scheme to the target scheme, e.g. from A (via B) to C, or from A (via B and C) to F. If we start at A, the conversion process performs the transitions defined by this path and ends up at C. On a request for a specific transliteration we just need to find such a path through the graph. (If we cannot find a path connecting two transliteration schemes, no conversion is possible: we lack one or more individual converters and – of course – should implement them.)
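Such a graph can be kept in code as a simple adjacency map. Again this is only a sketch with hypothetical names, not the structure actually used in our code:

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical minimal converter interface, as in the sketch above.
    interface Converter {
        String convert(String word);
    }

    // Each node is a scheme name; each directed edge carries the converter
    // instance that implements that single conversion step.
    class ConversionGraph {
        private final Map<String, Map<String, Converter>> edges = new HashMap<>();

        void addConverter(String from, String to, Converter converter) {
            edges.computeIfAbsent(from, k -> new HashMap<>()).put(to, converter);
        }

        // All schemes reachable from the given scheme in exactly one step.
        Map<String, Converter> edgesFrom(String scheme) {
            return edges.getOrDefault(scheme, Collections.emptyMap());
        }
    }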

So how can we find a path through the graph? And how can we do that efficiently? If we want to convert 100,000 words, it would be pretty impractical to perform a path search for each single word – that simply does not make sense. And how can we stay modular, so that it will be easy to add more transliteration schemes in the future?

Instead of searching for a path for every single conversion request, we "plan ahead": we build a list of all possible conversion chains beforehand by making use of this graph and store them in memory, so that later a simple lookup will give us the chain needed for the required transliteration. For computers this is no big deal: even if we fully supported five or even six conversion schemes, this would result in only 5 · 4 = 20 or 6 · 5 = 30 chains. Nothing a computer can't handle. So we end up with this initialization algorithm (a Java sketch follows below):

  1. Instantiate all single converters that are implemented
  2. Build a graph from these instances, where each node represents a transliteration scheme and each directed edge represents a conversion between two schemes
  3. For each node:
    1. Find a path to every other node in the graph
    2. Store each path for easy lookup later
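Step 3 can be implemented with a plain breadth-first search from every node, which conveniently finds the shortest chains first. Here is a sketch under the same hypothetical types as above; "build" takes the edge map of the graph sketched earlier:

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    // Same hypothetical Converter interface as in the sketches above.
    interface Converter {
        String convert(String word);
    }

    class ChainTable {
        // source scheme -> (target scheme -> ordered list of converters to apply)
        final Map<String, Map<String, List<Converter>>> chains = new HashMap<>();

        // edges: scheme -> (directly reachable scheme -> converter for that edge)
        void build(Set<String> schemes, Map<String, Map<String, Converter>> edges) {
            for (String source : schemes) {
                Map<String, List<Converter>> fromSource = new HashMap<>();
                fromSource.put(source, new ArrayList<>()); // empty chain: nothing to do
                Deque<String> queue = new ArrayDeque<>();
                queue.add(source);
                while (!queue.isEmpty()) { // plain breadth-first search
                    String current = queue.remove();
                    Map<String, Converter> outgoing =
                            edges.getOrDefault(current, Collections.emptyMap());
                    for (Map.Entry<String, Converter> edge : outgoing.entrySet()) {
                        if (fromSource.containsKey(edge.getKey())) {
                            continue; // already reached by a chain at least as short
                        }
                        List<Converter> chain = new ArrayList<>(fromSource.get(current));
                        chain.add(edge.getValue()); // extend the chain by this edge
                        fromSource.put(edge.getKey(), chain);
                        queue.add(edge.getKey());
                    }
                }
                chains.put(source, fromSource);
            }
        }
    }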

An example

To give an example: in our implementation we currently support these conversions (and maybe others in the future):

  • from Devanagari to Harvard-Kyoto (D -> HKC)
  • from Harvard-Kyoto to Devanagari (HKC -> D)
  • from Harvard-Kyoto to Rhys-Davids (HKC -> RD)
  • from Harvard-Kyoto to Velthuis (HKC -> V)
  • from Rhys-Davids to Harvard-Kyoto (RD -> HKC)
  • from Velthuis to Harvard-Kyoto (V -> HKC)

Therefore we arrive at this graph:

[Figure: the conversion graph of the four schemes, with Harvard-Kyoto (HKC) as the central hub]

You can easily see that in this particular implementation we have chosen the Harvard-Kyoto transliteration as the main intermediate step. Conversions will typically transit through the "HKC" state, as you can see in the chart. Therefore, for our four transliteration schemes we only need to implement 2 · (4 − 1) = 6 converters instead of 4 · 3 = 12 direct ones: a pair to and from the hub for each non-hub scheme. "Mission accomplished", it seems: a maximum of benefit with a minimum of effort, while still being modular, because more transliterations can easily be added in the future.
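Putting the pieces together, a transliteration request then boils down to a table lookup followed by applying the chain step by step. Once more a hypothetical sketch, with the converter bodies stubbed out as identity functions:

    import java.util.List;

    class LookupExample {
        interface Converter {
            String convert(String word);
        }

        public static void main(String[] args) {
            // The chain a lookup for (Devanagari, Rhys-Davids) would return from
            // the precomputed table: first D -> HKC, then HKC -> RD. The identity
            // stubs merely stand in for real character-mapping converters.
            List<Converter> devanagariToRhysDavids = List.of(
                    word -> word, // stub for the D -> HKC step
                    word -> word  // stub for the HKC -> RD step
            );

            String word = "धम्म"; // "dhamma" written in Devanagari script
            for (Converter step : devanagariToRhysDavids) {
                word = step.convert(word); // apply each intermediate conversion
            }
            System.out.println(word);
        }
    }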

All our code is open source. If you want to have a look at it, just clone our git repositories “LibUtils_Java” and “LibPali_Java” from https://github.com/senereko.

