The initial reloadTextCache() operation needs to read 1k characters, which
can be slow on low-end devices. However, the initial load does not block
keystrokes, so it can afford to take a little longer.
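A rough sketch of the idea, assuming a RichInputConnection-style text cache
(the class, field, and constant names below are illustrative, not the exact
implementation):

    import android.view.inputmethod.InputConnection;

    final class TextCacheSketch {
        // Initial, non-latency-critical load: read a large window once.
        private static final int INITIAL_CACHE_READ_SIZE = 1024;
        // Latency-critical incremental reads can use a much smaller window.
        private static final int INCREMENTAL_READ_SIZE = 80;

        private final StringBuilder mTextBeforeCursor = new StringBuilder();

        void reloadTextCache(final InputConnection ic) {
            if (ic == null) return;
            final CharSequence before =
                    ic.getTextBeforeCursor(INITIAL_CACHE_READ_SIZE, 0);
            mTextBeforeCursor.setLength(0);
            if (before != null) {
                mTextBeforeCursor.append(before);
            }
        }
    }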
Bug 22062102.
Change-Id: I134424e8910c0d6131c311a862bdc87eccd3af44
1. Add a mechanism to detect a slow or non-responsive InputConnection
(IC); see the sketch below.
2. When IC slowness is detected, skip certain IC calls that are known
to be expensive (e.g., getTextAfterCursor).
3. Similarly, disable learning / unlearning on a slow IC.
4. Reset the IC slowness flag when starting input on a new TextView or
when a fixed amount of time has passed.
Note: These are mostly temporary workarounds. The permanent solution is
to refactor RichInputConnection so that it is less sensitive to IC
slowness in general.
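A rough sketch of the detection idea (class name, threshold, and timeout
below are illustrative assumptions, not the actual values):

    import android.os.SystemClock;
    import android.view.inputmethod.InputConnection;

    public final class InputConnectionSpeedTracker {
        private static final long SLOW_IC_THRESHOLD_MS = 200;               // assumed
        private static final long SLOW_IC_RESET_AFTER_MS = 10 * 60 * 1000;  // assumed

        private boolean mIcIsSlow = false;
        private long mSlowDetectedTime = 0;

        // Time a (normally cheap) IC call; flag the IC as slow if it takes too long.
        public CharSequence getTextBeforeCursorTimed(final InputConnection ic, final int n) {
            final long start = SystemClock.uptimeMillis();
            final CharSequence text = ic.getTextBeforeCursor(n, 0);
            if (SystemClock.uptimeMillis() - start > SLOW_IC_THRESHOLD_MS) {
                mIcIsSlow = true;
                mSlowDetectedTime = SystemClock.uptimeMillis();
            }
            return text;
        }

        // Expensive calls (e.g. getTextAfterCursor) and learning/unlearning
        // are skipped while this returns true.
        public boolean isIcSlow() {
            if (mIcIsSlow
                    && SystemClock.uptimeMillis() - mSlowDetectedTime > SLOW_IC_RESET_AFTER_MS) {
                mIcIsSlow = false;  // a fixed amount of time has passed
            }
            return mIcIsSlow;
        }

        // Reset when starting input on a new TextView.
        public void onStartInput() {
            mIcIsSlow = false;
        }
    }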
Bug: 21926256
Change-Id: I383fab0516d3f3a8e0f71e5d760a8336a7730f7c
Users rarely tap on committed words, and the cost of sending the spans
through the input connection, back and forth to the target app, is too high.
Bug 21926256.
Change-Id: I8e55b57ce2148ed313dc927425b6d9c958634958
This is causing issues we can't deal with in a safe and timely manner.
Furthermore, users who need downloaded dictionaries already have them by now.
Bug 21797386.
Change-Id: I97e5fd84edcf2b16f04db57b7ae4a13fa9ce993f
We never delete text after the cursor, so constrain the API accordingly.
Define constants for the number of characters to read before and after the
cursor, and set them to reasonable values.
The next CL will start caching text after the cursor.
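A minimal sketch of the constrained API (class and constant names are
assumptions; the point is only that deletion never applies after the cursor):

    import android.view.inputmethod.InputConnection;

    public final class CursorTextOps {
        // Assumed names/values for the read sizes mentioned above.
        public static final int NUM_CHARS_TO_READ_BEFORE_CURSOR = 80;
        public static final int NUM_CHARS_TO_READ_AFTER_CURSOR = 40;

        private final InputConnection mIC;

        public CursorTextOps(final InputConnection ic) {
            mIC = ic;
        }

        // We never delete text after the cursor, so the API only takes a
        // "before" length and always passes 0 for the "after" length.
        public void deleteTextBeforeCursor(final int beforeLength) {
            mIC.deleteSurroundingText(beforeLength, 0 /* afterLength */);
        }
    }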
Bug 21926256.
Change-Id: Idd58daf68614de4a69344aa3c8a4323720c5d3a0
Note: this doesn't mean that sync will happen. It only unblocks users who
have already opted into cloud sync.
Change-Id: I91836efadac89d0429d7f2e9c9190a873a638743
We want to let the facilitator decide if a word is valid or invalid, and cache
the answer in the facilitator's cache. The spell checker session doesn't need
its own word cache, except as a crutch to communicate suggestions to the code
that populates the suggestion drop-down. We leave that in place.
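A rough sketch of the delegation (class and method names are approximations,
not the real API):

    import java.util.HashMap;
    import java.util.Locale;

    public final class FacilitatorSketch {
        // The facilitator owns the validity cache; callers no longer keep their own.
        private final HashMap<String, Boolean> mValidityCache = new HashMap<>();

        public boolean isValidWord(final String word, final Locale locale) {
            final String key = locale + ":" + word;
            final Boolean cached = mValidityCache.get(key);
            if (cached != null) {
                return cached;
            }
            final boolean isValid = lookUpInDictionaries(word, locale);
            mValidityCache.put(key, isValid);
            return isValid;
        }

        private boolean lookUpInDictionaries(final String word, final Locale locale) {
            return false;  // placeholder for the real dictionary lookup
        }
    }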
Bug 20018546.
Change-Id: I3c3c53e0c1d709fa2f64a2952a232acd7380b57a
Confusingly, specifying a null Locale object to the constructor
of SuggestionSpan does not necessarily mean that
SuggestionSpan#getLocale() returns null. The constructor in
question also receives a Context object, and the Context's locale
can be used as a fallback to initialize the locale of the
SuggestionSpan.
With this CL, LatinIME always specifies a non-null Locale object
when instantiating a SuggestionSpan. The locale basically
corresponds to the active main dictionary, but can be
Locale#ROOT when no single locale can be determined for some reason.
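A minimal sketch of the construction (the helper name is hypothetical; the
SuggestionSpan constructor shown is the framework one that takes an explicit
Locale):

    import android.content.Context;
    import android.text.style.SuggestionSpan;
    import java.util.Locale;

    public final class SuggestionSpanUtilsSketch {
        public static SuggestionSpan newSuggestionSpan(final Context context,
                final Locale dictionaryLocale, final String[] suggestions, final int flags) {
            // Never pass null: fall back to Locale.ROOT instead of letting the
            // Context's locale be picked up implicitly.
            final Locale nonNullLocale =
                    (dictionaryLocale != null) ? dictionaryLocale : Locale.ROOT;
            return new SuggestionSpan(context, nonNullLocale, suggestions, flags,
                    null /* notificationTargetClass */);
        }
    }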
BUG: 20435013
Change-Id: I2c152466410327300e7dba4d7ed9a22f57c17c4f
This allows us to:
1. Rank contacts and only add the top N names to the keyboard LM.
2. Avoid adding duplicate names.
Note: The affinity calculation is limited by the fact that some apps
currently do not update the TIMES_CONTACTED counter. To better handle
this case, the new measure also takes into account whether or not a
name is in the visible contacts group.
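A sketch of one possible affinity score (the weights and cap are made-up
illustrations; TIMES_CONTACTED and IN_VISIBLE_GROUP come from ContactsContract):

    public final class ContactAffinitySketch {
        public static float computeAffinity(final int timesContacted,
                final boolean inVisibleGroup) {
            // TIMES_CONTACTED is unreliable in some apps, so membership in the
            // visible contacts group also contributes to the score.
            final float frequencyScore = Math.min(timesContacted, 100) / 100.0f;
            final float visibilityScore = inVisibleGroup ? 1.0f : 0.0f;
            return 0.5f * frequencyScore + 0.5f * visibilityScore;
        }
    }

Contacts would then be sorted by this score, with only the top N names added
to the keyboard LM and duplicates skipped.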
Bug: 20053274
Change-Id: I2741cb8958667d4a294aba8c437a45cec4b42dc7
Does the following:
1. Uses dictionaries from the files/ directory when populating the
pendingUpdates table, so that a download happens only if metadata.json
says so.
2. Deletes unusable dictionaries from the files/ directory.
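A sketch of the cleanup in item 2 (isUsableDictionaryFile is a hypothetical
placeholder for the real validity check):

    import java.io.File;

    public final class DictionaryCleanupSketch {
        public static void deleteUnusableDictionaries(final File filesDir) {
            final File[] files = filesDir.listFiles();
            if (files == null) return;
            for (final File file : files) {
                if (!isUsableDictionaryFile(file)) {
                    file.delete();
                }
            }
        }

        private static boolean isUsableDictionaryFile(final File file) {
            // Placeholder: the real check validates the dictionary header/format.
            return file.length() > 0;
        }
    }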
Bug: 20142708
Change-Id: Ibd738793585c39735868e324b8ad682dff0eba34
The raw strings are sent to the personal LM for decoding.
Previously, lowercased strings were used for isValid checks
(spell checking does not consider casing). But to show these
words as suggestions, we need the raw strings.
Note: PersonalDictionaryLookup#getWordsForLocale is used to feed
the personal LM in PersonalLanguageModelHelper.
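A minimal sketch of keeping both forms (the class name is hypothetical):
lowercased keys for case-insensitive isValid checks, raw values for the
personal LM and for display.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Locale;

    public final class PersonalWordsSketch {
        // lowercased form -> raw (original-case) string
        private final HashMap<String, String> mLowercasedToRaw = new HashMap<>();

        public void addWord(final String rawWord, final Locale locale) {
            mLowercasedToRaw.put(rawWord.toLowerCase(locale), rawWord);
        }

        public boolean isValidWord(final String word, final Locale locale) {
            return mLowercasedToRaw.containsKey(word.toLowerCase(locale));
        }

        // Raw strings, suitable for feeding the personal LM and showing as suggestions.
        public List<String> getWordsForLocale() {
            return new ArrayList<>(mLowercasedToRaw.values());
        }
    }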
Bug: 20152986
Change-Id: I9d796fa57bf2073036bf11d86b143ff205a6199c
Use LOOKBACK_CHARACTER_NUM = 80 instead of the previous
EDITOR_CONTENTS_CACHE_SIZE = 1024 (which was overkill).
This speeds up many InputLogic operations.
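A sketch of a read using the new constant (the helper method and its usage
are illustrative, not the actual call site):

    import android.view.inputmethod.InputConnection;

    public final class LookbackSketch {
        // 80 characters of left context is enough for any decoder we use;
        // the old 1024-character read was overkill.
        private static final int LOOKBACK_CHARACTER_NUM = 80;

        public static CharSequence getLeftContext(final InputConnection ic) {
            return (ic == null) ? null : ic.getTextBeforeCursor(LOOKBACK_CHARACTER_NUM, 0);
        }
    }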
Bug: 19987461
Change-Id: I62b6a589f87e5daab33376b3e48f1c615a66dcfb
Currently, we read 256 (max word size) * 5 (max N-gram size + 1) characters
from the input connection when building our context. This is overkill. We
don't need more than 80 characters, regardless of which decoder we use.
Bug 19987461.
Change-Id: Ie3a321cf2482adbacd8006d9d86e6601097c15ed
The spell checker decodes, and gets multiple sets of suggestions for, every
word it encounters. It even does this for in-vocabulary words, though it
will never underline or show suggestions for those words.
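A rough sketch of the fix (isInDictionary and getSuggestionsFor are
hypothetical helpers standing in for the real spell checker session code):

    import android.view.textservice.SuggestionsInfo;

    public final class SpellCheckSketch {
        public SuggestionsInfo checkWord(final String word) {
            if (isInDictionary(word)) {
                // In-vocabulary words are never underlined and never show
                // suggestions, so skip the expensive decode entirely.
                return new SuggestionsInfo(
                        SuggestionsInfo.RESULT_ATTR_IN_THE_DICTIONARY, new String[0]);
            }
            return getSuggestionsFor(word);  // the expensive decoding path
        }

        private boolean isInDictionary(final String word) {
            return false;  // placeholder
        }

        private SuggestionsInfo getSuggestionsFor(final String word) {
            return new SuggestionsInfo(0, new String[0]);  // placeholder
        }
    }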
Bug 19987461.
Change-Id: Ie61101fa8ab8917f3f49c77768dbcffd96c1685e
Also move the class to the parent package, since it's no longer tied to the
spell checking service.
Bug 19966848.
Bug 20036810.
Change-Id: I35014d212fd87281eb90def03ee92e6872dcd63e
Autocorrection and next-word suggestion are independent,
but the settings UI creates a dependency.
Bug: 19896768.
Change-Id: Ibcdd497cdfd7b9c3a69c61e0c2d116d67df84ef8
We're waiting 10 minutes for tests to run, and half of that time is spent in
deprecated code related to the migration of Delight2 dictionary files.
LatinIME will never migrate another Delight2 dictionary file again, so we can
delete this code.
Change-Id: I05c7d8429e8d9a26139456763c77997340fea8c2