This behaves exactly like the old makedict command. Further
changes will redirect calls to makedict to this command, so
as to consolidate similar code.
Groundwork for
Bug: 6429606
Change-Id: Ibeadbf48bec70f988a15ca36ebf5d1ce3b5b54ea
We don't merge tails anyway, and we can't do it any more
because that would break the bigram lookup algorithm. The
speedup is about 20%, and possibly double that if there are
no bigrams.
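To see why merging would break lookup (a toy illustration, not
actual makedict code; the words and addresses are made up): a bigram
is resolved through the address of its target word's terminal
chargroup, and a merged tail would make one address stand for
several words:

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class WhyMergedTailsBreakBigrams {
        public static void main(String[] args) {
            // Unmerged trie: each word keeps its own terminal address, so
            // a bigram target address identifies exactly one word.
            final Map<Integer, String> unmerged = new HashMap<>();
            unmerged.put(0x120, "tasting");
            unmerged.put(0x180, "fasting");
            System.out.println(unmerged.get(0x120)); // "tasting", unambiguous

            // Merged tails: "tasting" and "fasting" would share the tail
            // "asting", so one terminal address maps to both candidates
            // and the bigram can no longer name a single word.
            final Map<Integer, List<String>> merged = new HashMap<>();
            merged.put(0x120, Arrays.asList("tasting", "fasting"));
            System.out.println(merged.get(0x120)); // ambiguous
        }
    }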
Bug: 6394357
Change-Id: I9eec11dda9000451706d280f120404a2acbea304
Rename it, rename parameters, and add a parameter that will
be necessary soon.
Also, rescale the bigram frequency as necessary.
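As an illustration of the rescaling (a sketch only; the 4-bit
on-disk width and all names are assumptions, not necessarily what
makedict does):

    public class BigramFrequencyRescaler {
        static final int MAX_FREQUENCY = 255;        // in-memory scale
        static final int ON_DISK_BITS = 4;           // assumed on-disk width
        static final int MAX_ON_DISK = (1 << ON_DISK_BITS) - 1; // 15

        // Linearly rescale a 0..255 frequency to the smaller on-disk range.
        static int rescaleBigramFrequency(final int frequency) {
            return frequency * MAX_ON_DISK / MAX_FREQUENCY;
        }

        public static void main(String[] args) {
            System.out.println(rescaleBigramFrequency(255)); // 15
            System.out.println(rescaleBigramFrequency(128)); // 7
        }
    }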
Bug: 6313806
Change-Id: I192543cfb6ab6bccda4a1a53c8e67fbf50a257b0
This is not the Right fix; the Right fix would be to read
the file in a buffered way. However, this delivers tolerable
performance for a minimal amount of code change.
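For reference, the buffered version would look something like this
(a minimal sketch; the class and file handling are illustrative):

    import java.io.BufferedInputStream;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class BufferedDictReader {
        // Wrapping the stream in a BufferedInputStream makes byte-by-byte
        // reads hit an in-memory buffer instead of one system call each.
        static int countBytes(final String fileName) throws IOException {
            try (InputStream in = new BufferedInputStream(
                    new FileInputStream(fileName))) {
                int count = 0;
                while (in.read() != -1) count++;
                return count;
            }
        }
    }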
We may want to skip submitting this patch, but keep it
around in case we need the functionality before we have a
good patch.
Change-Id: I1ba938f82acfd9436c3701d1078ff981afdbea60
The core reason for this is quite subtle. When a word is a bigram
of itself, the corresponding chargroup will have a bigram referring
to itself. When computing bigram offsets, we use cached addresses of
chargroups, but we compute the size of the node as we go. Hence, a
discrepancy may appear between the base offset as seen by the bigram
(which uses the recomputed value) and the target offset (which uses
the cached value).
When this happens, the cached node address is too large. The relative
offset is negative, which is expected, since it points to this very
chargroup, whose start is a few bytes earlier. But since the cached
address is too large, the offset is computed as smaller in absolute
value than it should be.
On the next pass, the cache has been refreshed with the newly computed
size, and the offset is now correct (or at least much closer to
correct). The correct value is larger in absolute value than the
previously computed offset, which was too small. If it happens to
cross the -255 or -65535 boundary, the address will be seen as needing
one more byte than previously computed. If this is the only change in
the size of this node, the node will be seen as having grown, which
is unexpected because node components are only supposed to shrink with
each pass. Debug code was catching this and crashing the program.
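Concretely, the number of bytes an offset needs follows a threshold
scheme along these lines (a simplified sketch; the helper is
hypothetical, and the real code also packs sign and size information
into the group's flags):

    public class OffsetSize {
        // Hypothetical helper: bytes needed to store a relative offset
        // under the 1/2/3-byte scheme implied by the -255 and -65535
        // boundaries mentioned above.
        static int getByteSize(final int offset) {
            final int absOffset = Math.abs(offset);
            if (absOffset <= 255) return 1;
            if (absOffset <= 65535) return 2;
            return 3;
        }

        public static void main(String[] args) {
            // With the stale cached address the offset looked like -250
            // (1 byte); once the cache is refreshed it is really -260
            // (2 bytes), so the chargroup, and hence its node, appears
            // to grow between passes.
            System.out.println(getByteSize(-250)); // 1
            System.out.println(getByteSize(-260)); // 2
        }
    }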
So this case is very rare, but in an even rarer occurrence, it may
happen that in the same node, another chargroup happens to decrease
its size by the same amount. In this case, the node may be seen as
not having been modified at all. If, on top of this, it happens that
no other node has been modified, then the file may be seen as
complete, and the discrepancy left as is in the file, leading to a
broken file. The probability that this happens is abysmally low, but
the bug exists, and the current debug code would not have caught it.
To catch similar bugs in the future, this change also modifies the
test that decides whether a node has changed. On the grounds that all
components of a node may only decrease in size with each successive
pass, it is theoretically safe to assume that an identical size means
the node contents have not changed; but in case of a bug like the one
above, where a component wrongly grows while another shrinks and the
two cancel each other out, the new code will catch it. Also, this
change adds a check on the number of passes, to avoid infinite loops
in case of a bug in the computation code.
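The strengthened loop can be sketched like this (hypothetical,
simplified types and names, not the actual makedict code):

    public class AddressComputation {
        static final int MAX_PASSES = 24; // illustrative bound

        interface Node {
            // Recomputes this node's layout; returns true if anything
            // moved, even when growth in one group is cancelled out by
            // shrinkage in another, which a size comparison would miss.
            boolean recomputeAndCheckChanged();
        }

        static void computeAddresses(final java.util.List<Node> nodes) {
            for (int pass = 0; ; pass++) {
                if (pass > MAX_PASSES) {
                    // Guard against infinite loops from a computation bug.
                    throw new RuntimeException("Too many passes; probable bug");
                }
                boolean changed = false;
                for (final Node node : nodes) {
                    if (node.recomputeAndCheckChanged()) changed = true;
                }
                if (!changed) return; // layout is stable
            }
        }

        public static void main(String[] args) {
            // Trivial run: a node that stabilizes after one recomputation.
            final Node node = new Node() {
                private boolean first = true;
                public boolean recomputeAndCheckChanged() {
                    final boolean changed = first;
                    first = false;
                    return changed;
                }
            };
            computeAddresses(java.util.Arrays.asList(node));
            System.out.println("converged");
        }
    }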
This change fixes the bug itself by updating the cached address of
each chargroup as we go, which eliminates the discrepancy.
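The shape of the fix, as a stripped-down sketch (hypothetical types;
the real CharGroup and size computation are more involved):

    public class CachedAddressUpdate {
        static class CharGroup {
            int cachedAddress;
            int computeSize(final int selfAddress) {
                // In the real code the size depends on bigram offsets,
                // which are computed from cached addresses; stubbed here.
                return 3;
            }
        }

        // Refresh each group's cached address while measuring the node,
        // so a self-referencing bigram sees the same base address on both
        // sides of the offset computation.
        static int computeNodeSize(final CharGroup[] groups,
                final int nodeAddress) {
            int size = 0;
            for (final CharGroup group : groups) {
                group.cachedAddress = nodeAddress + size; // update as we go
                size += group.computeSize(group.cachedAddress);
            }
            return size;
        }

        public static void main(String[] args) {
            final CharGroup[] groups = { new CharGroup(), new CharGroup() };
            System.out.println(computeNodeSize(groups, 0x100)); // 6
        }
    }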
Bug: 6383103
Change-Id: Ia3f450e22c87c4c193cea8ddb157aebd5f224f01
Follow-up to I4d2ef504. Also addresses a compiler warning and makes a
small optimization.
Bug: 6188977
Bug: 6209651
Change-Id: Ibc9da51d48ebf0b8815ad0bb2f697242970ba8f7