Gaia/System/Keyboard/IME/Latin/Dictionary Blob


See Also: Gaia/System/Keyboard/IME/Latin/Prediction_&_Auto_Correction

We store the dictionary data structures in a binary blob, which gives better space utilization and computational efficiency than a plain-text, human-readable word list.

We serialize a dictionary Ternary Search Tree (TST), along with supporting metadata and auxiliary data structures, into a compact binary blob for the prediction engine to consume. The xml2dict.py script is responsible for generating the binary blob, converting dictionary *.xml files to blob *.dict files. The format of the blob is described in this article.

Header

The file begins with the 8 ASCII characters "FxOSDICT" and four more bytes. Bytes 8, 9 and 10 are currently unused and byte 11 is a dictionary format version number, currently 1.

Byte 12 specifies the length of the longest word in the dictionary.
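
For illustration, here is a minimal Python sketch that validates and reads this 13-byte header; the function name and return values are our own, not part of the actual tools:

def read_header(blob):
    # Bytes 0-7: the ASCII magic string "FxOSDICT".
    if blob[0:8] != b"FxOSDICT":
        raise ValueError("not a FxOS dictionary blob")
    # Bytes 8-10 are unused; byte 11 is the dictionary format version (currently 1).
    version = blob[11]
    # Byte 12 is the length of the longest word in the dictionary.
    max_word_length = blob[12]
    return version, max_word_length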

Character Table

After these first 13 bytes, the file contains a character table that lists the characters used in the dictionary. This table begins with a two-byte big-endian integer that specifies the number of entries in the table. Table entries follow. Each table entry is a big-endian two-byte character code, followed by a big-endian 4-byte number that specifies the number of times the character appears in the dictionary. The entries are sorted by this count, from most frequent to least frequent.
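
A minimal Python sketch of reading this table (again, our own helper, not the real converter code) could look like this:

import struct

def read_char_table(blob):
    # Two-byte big-endian entry count, stored right after the 13-byte header.
    (num_entries,) = struct.unpack_from(">H", blob, 13)
    entries = []
    offset = 15
    for _ in range(num_entries):
        # Each entry: 2-byte character code, then 4-byte occurrence count, both big-endian.
        code, count = struct.unpack_from(">HI", blob, offset)
        entries.append((chr(code), count))
        offset += 6
    return entries, offset   # offset is now where the serialized nodes begin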

Serialized Nodes

After the character table, the serialized nodes are stored depth-first, starting at byte [15 + num_char_table_entries*6]. Each node has a variable length, between 1 and 6 bytes, encoded as follows.

The first byte of each node is always an 8-bit bitfield: csnfffff

The c bit specifies whether this node represents a character.

  • If c is 1, then a character code follows this first byte.
  • If c is 0, then this is a terminal node that marks the end of the word.
    • If the n bit is 0, then this node is only 1 byte long (= this very first byte).
    • If the n bit is 1, then the "next" pointer follows, as described below, and this node is 4 bytes long.

The s bit specifies the size of the character associated with this node.

  • If s is 0, the character is one byte long.
  • If s is 1, the character is a big-endian two byte value.

The n bit specifies whether this node includes a next pointer.

  • If n is 1, the character code is followed by a big-endian 3-byte number.

The fffff bits represent a number between 0 and 31 and provide a weight for the node. This is usually based on the word frequency data from the dictionary, though it may be tuned, for example by adjusting the frequency depending on word length. At any node, these frequency bits represent the weight of the highest-frequency word that can be reached by traversing the node's center-child descendants.

If the c bit is set, the following one or two bytes (depending on the s bit) of the node are the Unicode character code that is stored in the node.

If the n bit is set, the following 3 bytes are a big-endian 24-bit unsigned integer, representing the pointer to the "next" character node. The pointer contains the byte offset from the beginning of the serialized nodes, i.e. after the character table.

If the c bit was set, the node has a character, which means that it also has a center child, as defined by the TST structure. We serialize the tree so that in the binary blob the center child always immediately follows the current node, so we don't need to store a pointer to the center child.

As it turns out, we never need left and right nodes during prediction, so they are not serialized into the blob.
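
Putting this together, a hedged Python sketch of decoding one node could look like the following; the offsets passed in and returned are relative to the start of the node area, and the names are our own:

def read_node(nodes, offset):
    bits = nodes[offset]
    has_char  = bool(bits & 0b10000000)   # c: node carries a character
    wide_char = bool(bits & 0b01000000)   # s: character is 2 bytes instead of 1
    has_next  = bool(bits & 0b00100000)   # n: a 3-byte "next" pointer follows
    freq      = bits & 0b00011111         # fffff: weight, 0..31
    offset += 1

    ch = None
    if has_char:
        if wide_char:
            code = int.from_bytes(nodes[offset:offset + 2], "big")
            offset += 2
        else:
            code = nodes[offset]
            offset += 1
        ch = chr(code)

    next_ptr = None
    if has_next:
        # Big-endian 3-byte offset, counted from the beginning of the node area.
        next_ptr = int.from_bytes(nodes[offset:offset + 3], "big")
        offset += 3

    # If has_char is True, the center child starts at the returned offset.
    return ch, freq, next_ptr, offset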

TST Serialization Example

Let's serialize a TST. Suppose we have a list of words:

'zila', 'zilian', 'ziliac', 'zart', 'zalarm', 'zset'

The TST looks like this: (Note: this tree structure depends on word frequency as in Gaia/System/Keyboard/IME/Latin/Prediction_&_Auto_Correction#Example_TST).

[z]
 |
[i     ->     a     ->     s]
 |            |            |
[l]          [r  ->  l]   [e]
 |            |      |     |
[a  ->  i]   [t]    [a]   [t]
        |            |
       [a]          [r]
        |            |
       [n  ->  c]   [m]

Suppose the normalized frequency (from 0 to 31, as described above) is:

Word Normalized Frequency
zila 31
zilian 22
ziliac 7
zart 27
zalarm 17
zset 12

The TST will be serialized as follows (after the character table):

(Again, note that offset 0 denotes the beginning of the tree, and is byte [15 + 11*6] = byte 81 from the beginning of the whole blob; 11 because the dictionary uses 11 distinct characters.)

Node Offset Byte(s) Description
Level 1
'z'
0 0x9f
= 0b10011111
  • This node has character 'z', a one-byte ASCII character
  • No "next" pointer
  • The most-frequent word that can be traversed from this node has freq = 31, of 'zila'
1 0x7a
= chr('z')
Level 2
'i'
2 0xbf
= 0b10111111
  • This node has character 'i' of the 2nd level, as the center node from 'z' above
  • There is a "next" pointer, to 'a'
  • The most-frequent word that can be traversed from this node has freq = 31, of 'zila'
3 0x69
= chr('i')
4 5 6 0x00 0x00 0x1c
= hex(28)
  • The position of the 'a' node, next-pointed from 'i'
Level 3
'l'
7 0x9f
= 0b10011111
  • This node has character 'l' of the 3rd level, as the center node from 'i' above
  • No "next" pointer
  • The most-frequent word that can be traversed from this node has freq = 31, of 'zila'
8 0x6c
= chr('l')
Level 4
'a'
9 0xbf
= 0b10111111
  • This node has character 'a' of the 4th level, as the center node from 'l' above
  • There is a "next" pointer, to 'i'
  • The most-frequent word that can be traversed from this node has freq = 31, of 'zila'
10 0x61
= chr('a')
11 12 13 0x00 0x00 0x0f
= hex(15)
  • The position of the 'i' node, next-pointed from 'a'
terminal 14 0x1f
= 0b00011111
  • This is the terminal node for the 'zila' word. Frequency of 'zila' is 31
  • No "next" pointer
Level 4
'i'
15 0x96
= 0b10010110
  • This is the 'i' node next-pointed from 'a', in the 4th level
  • No 'next' pointer
  • The most-frequent word that can be traversed from this node has freq = 22, of 'zilian'
16 0x69
= chr('i')
Level 5
'a'
17 0x96
= 0b10010110
  • This node has character 'a' of the 5th level, as the center node from 'i' above
  • No 'next' pointer
  • The most-frequent word that can be traversed from this node has freq = 22, of 'zilian'
18 0x61
= chr('a')
Level 6
'n'
19 0xb6
= 0b10110110
  • This node has character 'n' of the 6th level, as the center node from 'a' above
  • There is a "next" pointer, to 'c'
  • The most-frequent word that can be traversed from this node has freq = 22, of 'zilian'
20 0x6e
= chr('n')
21 22 23 0x00 0x00 0x19
= hex(25)
  • The position of the 'c' node, next-pointed from 'n'
terminal 24 0x16
= 0b00010110
  • This is the terminal node for the 'zilian' word. Frequency of 'zilian' is 22
  • No "next" pointer
Level 6
'c'
25 0x87
= 0b10000111
  • This is the 'c' node next-pointed from 'n', in the 6th level
  • No 'next' pointer
  • The most-frequent word that can be traversed from this node has freq = 7, of 'ziliac'
26 0x63
= chr('c')
terminal 27 0x07
= 0b00000111
  • This is the terminal node for the 'ziliac' word. Frequency of 'ziliac' is 7
  • No "next" pointer
Level 2
'a'
28 0xbb
= 0b10111011
  • This is the 'a' node next-pointed from 'i', in the 2nd level
  • There is a "next" pointer, to 's'
  • The most-frequent word that can be traversed from this node has freq = 27, of 'zart'
(and we encode character 'a')
(and the position of that "next" 's' node)
(...and the rest of the TST. As we serialize the nodes depth-first, the following nodes would be 'r', 't', 'l', 'a', 'r', 'm', 's', 'e', 't')
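
As a sanity check, the first byte of each node in the table above can be reproduced by assembling the csnfffff bitfield directly (a throwaway Python helper of ours, not part of xml2dict):

def first_byte(has_char, wide_char, has_next, freq):
    assert 0 <= freq <= 31
    return (has_char << 7) | (wide_char << 6) | (has_next << 5) | freq

assert first_byte(1, 0, 0, 31) == 0x9f   # 'z' and 'l': character, no next pointer, freq 31
assert first_byte(1, 0, 1, 31) == 0xbf   # 'i' and 'a': character plus next pointer, freq 31
assert first_byte(0, 0, 0, 31) == 0x1f   # terminal node of 'zila'
assert first_byte(1, 0, 1, 27) == 0xbb   # 'a' at level 2, next pointer to 's', freq 27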

Notes on Terminal Node

A terminal node (with c bit = 0) can still have a next pointer.

For example, consider the word list:

Word Frequency
zilia 127
zilian 255
ziliac 31

for which the TST will be:

[z]
 |
[i]
 |
[l]
 |
[i]
 |
[a]
 |
[n -> EOW -> c]

where the EOW denotes the terminal node.

We also need the terminal node to carry frequency information, so that the prediction engine knows the frequency of 'zilia' in this example.
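
For illustration, such a terminal node with a next pointer occupies 4 bytes: one csnfffff byte with c = 0 and n = 1, followed by the 3-byte pointer. A tiny Python sketch, with a made-up weight and offset:

def terminal_with_next(freq, next_offset):
    assert 0 <= freq <= 31
    first = 0b00100000 | freq                  # c = 0, s = 0, n = 1
    return bytes([first]) + next_offset.to_bytes(3, "big")

# e.g. the EOW node after 'a' above, assuming 'zilia' normalizes to weight 31
# and the 'c' node happens to live at offset 42 (both values are made up):
node = terminal_with_next(31, 42)              # -> bytes 3f 00 00 2a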

Known Issues

Same XML, Different Dict Blob

  • This was tested with Ubuntu 14.04's Python 3 (version 3.4.0), which the Makefile requires.
  • It is possible that two runs of XML-to-dict conversion on the same wordlist XML produce different dictionary binary blobs, causing git to see a new revision of the *.dict file.

For example, the following commands make diff report that the files differ:

# rm en.dict if make doesn't want to make a new en.dict
$ make en.dict
$ mv en.dict en.dict.old
$ make en.dict
$ diff en.dict en.dict.old
Binary files en.dict and en.dict.old differ
  • The difference does not affect any functionality in any way.
  • We can look further by tracking down the difference:
$ xxd en.dict > en.dict.xxd
$ xxd en.dict.old > en.dict.old.xxd
$ diff en.dict.xxd en.dict.old.xxd
27,29c27,29
< 00001a0: 0300 e000 0000 0300 2400 0000 0200 3300  ........$.....3.
< 00001b0: 0000 0200 e400 0000 0200 3200 0000 0100  ..........2.....
< 00001c0: e500 0000 0100 eb00 0000 0100 ee00 0000  ................
---
> 00001a0: 0300 e000 0000 0300 e400 0000 0200 3300  ..............3.
> 00001b0: 0000 0200 2400 0000 0200 3200 0000 0100  ....$.....2.....
> 00001c0: eb00 0000 0100 e500 0000 0100 ee00 0000  ................
  • We first determine that the difference is located in the character table:
    • Insert print(output.tell()) in the emit() function, before and after the first for-loop.
  • Now notice the actual difference: the bytes from {0x1a8 to 0x1ad} and from {0x1b4 to 0x1b9} are swapped: [2400 0000 0200] and [e400 0000 0200]. The same happens with the bytes from {0x1c0 to 0x1c5} and from {0x1c6 to 0x1cb}.
  • This is because characterFrequency.items() in emit() does not necessarily produce the same ordering on each run. It doesn't help that sorted() is stable, because the input order itself differs between runs.
  • Let's insert from pprint import pprint and pprint(list(characterFrequency.items())) in the emit() function. The results of two runs are:
First run:
[('é', 220),
('L', 2372),
('-', 277),
('j', 2151),
('M', 4099),
('å', 1),
('H', 2115),
('W', 1544),
('è', 29),
('G', 2251),
('f', 15984),
('a', 110508),
('z', 5884),
('â', 8),
('v', 12864),
('V', 924),
('Y', 325),
('à', 3),
('s', 131933),
('ü', 17),
('m', 35668),
('n', 94375),
('r', 98945),
('w', 11092),
('i', 113155),
('o', 85820),
('h', 30601),
('R', 2268),
('û', 1),
('S', 4682),
('t', 85186),
('ï', 4),
('e', 147827),
('q', 2139),
('I', 1213),
('ë', 1),
('X', 105),
('Q', 205),
('l', 71483),
('F', 1783),
('A', 3463),
('ö', 11),
('K', 1575),
('k', 13233),
('g', 34650),
('D', 2396),
('N', 1525),
('O', 1209),
('P', 2837),
('ñ', 21),
('d', 45495),
('ê', 18),
('B', 3676),
('2', 1),
('3', 2),
('Z', 314),
('u', 43362),
('É', 3),
('ç', 12),
('ó', 7),
('J', 1081),
("'", 30168),
('C', 4323),
('c', 50164),
('b', 24532),
('E', 1684),
('$', 2),
('x', 3605),
('î', 1),
('á', 10),
('p', 33944),
('ô', 6),
('U', 602),
('T', 2515),
('y', 22361),
('ä', 2)]

Second run:

[('î', 1),
('ó', 7),
('Y', 325),
('å', 1),
('c', 50164),
('ä', 2),
('J', 1081),
('F', 1783),
('l', 71483),
('é', 220),
('U', 602),
('s', 131933),
('x', 3605),
('ô', 6),
('-', 277),
('ë', 1),
('j', 2151),
('ê', 18),
('y', 22361),
('i', 113155),
('K', 1575),
('r', 98945),
('D', 2396),
('ö', 11),
('O', 1209),
('k', 13233),
('t', 85186),
('B', 3676),
('p', 33944),
('h', 30601),
('$', 2),
('É', 3),
('u', 43362),
('3', 2),
('A', 3463),
('Z', 314),
('H', 2115),
('f', 15984),
('T', 2515),
('I', 1213),
('n', 94375),
('e', 147827),
('â', 8),
('b', 24532),
('Q', 205),
('ç', 12),
('G', 2251),
('W', 1544),
('g', 34650),
('è', 29),
("'", 30168),
('R', 2268),
('S', 4682),
('P', 2837),
('2', 1),
('o', 85820),
('ñ', 21),
('z', 5884),
('w', 11092),
('X', 105),
('ï', 4),
('v', 12864),
('V', 924),
('N', 1525),
('á', 10),
('û', 1),
('m', 35668),
('M', 4099),
('C', 4323),
('à', 3),
('L', 2372),
('d', 45495),
('ü', 17),
('q', 2139),
('E', 1684),
('a', 110508)]
  • Let's look at the three tuples with frequency = 2, namely ('3', 2), ('$', 2), and ('ä', 2) (in the first run). In the second run, however, the ordering is ('ä', 2), ('$', 2), and ('3', 2). Thus, the written files are different.
  • Let's go back to the xxd diff result above and recall that the difference is the swapping of [2400 0000 0200] and [e400 0000 0200]. These bytes encode ('$', 2) and ('ä', 2). Also, between these addresses is [3300 0000 0200], encoding ('3', 2). So, we have a change of order among characters of the same frequency -- which does not affect any functionality, since the character table does not require any particular order for characters with the same frequency. One way to make the output deterministic anyway is sketched below.
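
One possible fix (an idea only, not what xml2dict.py currently does) is to break frequency ties on the character code when sorting, so that the character table comes out identical on every run:

# Toy example standing in for the real characterFrequency dict in xml2dict.py.
characterFrequency = {'3': 2, '$': 2, 'ä': 2, 'e': 147827}

entries = sorted(characterFrequency.items(),
                 key=lambda item: (-item[1], ord(item[0])))
# [('e', 147827), ('$', 2), ('3', 2), ('ä', 2)] -- the same on every run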

Ideas for Future Improvements

Combining Characters

  • Combining characters aren't handled very gracefully right now. These characters take separate code points, occupy separate char positions in a string, but appear as one "letter" and may be lexically one "letter" in some specific language.
  • Example: 'री', in Devanagari (the script used for Hindi), is composed of 'र' ('\u0930') and 'ी' ('\u0940').
    • (Alright, I know Hindi is not covered in Latin IME. But still...)
  • Currently our engine would regard two combining characters as... two independent characters (no surprise), and they take the space of two nodes.
  • However, ideally, if the two combining characters represent one single letter in a language, our engine should regard them as one character as though they're one char node.
  • How? One rough idea is sketched below.
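
One rough idea, sketched in Python under the assumption that a combining mark belongs to the base character preceding it: group each base character with its following marks before building the TST, so a base-plus-mark cluster could be treated as a single "letter". (Actually storing such a cluster in one node would of course also require a change to the node format.)

import unicodedata

def graphemes(word):
    clusters = []
    for ch in word:
        # Mn/Mc/Me are the Unicode "Mark" categories (combining marks).
        if clusters and unicodedata.category(ch).startswith('M'):
            clusters[-1] += ch      # attach the mark to the preceding base character
        else:
            clusters.append(ch)
    return clusters

print(graphemes('\u0930\u0940'))    # ['री'] -- base consonant + vowel sign as one cluster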

Emoji and Non-BMP Characters

Unfortunately, not all Unicode characters fit in the 2-byte code space. The emoji blocks in the Unicode SMP (Supplementary Multilingual Plane) are a very good example: the emoji characters occupy code points U+1F300 to U+1F6FF, so an emoji character cannot fit within two bytes.

  • If you run xml2dict.py on a word list containing an emoji, then you get a warning at the line where a character code point value is serialized into a 16-bit integer: output.write(struct.pack(">H", ord(item[0]))).
    • And the resulting dictionary blob is, of course, not usable.
  • If you run word_list_converter.js on a word list containing an emoji, then -- alright, JavaScript uses UCS-2 to encode Unicode code points (tongue-twisting sentence, I know), so those emoji characters get broken into surrogate pairs.
    • For example, this earth emoji '🌏', with code point value U+1F30F, is broken into the surrogate pair '\uD83C\uDF0F' in UCS-2.
    • This will (conveniently) allow us to squeeze an emoji character into our current dictionary blob structure without modification. However, we'll still run into the same issue as in "Combining Characters": the engine needs to know that a surrogate pair, albeit taking the space of two nodes, is one single letter. (A sketch of the surrogate-pair math follows below.)
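
For reference, here is the surrogate-pair math in a small Python sketch, reproducing what JavaScript strings do implicitly for the earth emoji above:

def to_surrogate_pair(code_point):
    # Valid for any code point above the BMP (> U+FFFF).
    assert code_point > 0xFFFF
    v = code_point - 0x10000
    high = 0xD800 + (v >> 10)       # high (lead) surrogate
    low = 0xDC00 + (v & 0x3FF)      # low (trail) surrogate
    return high, low

print([hex(u) for u in to_surrogate_pair(0x1F30F)])   # ['0xd83c', '0xdf0f']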