Spelling Correction using Encoder Transformer#

This tutorial is available as an IPython notebook at Malaya/example/spelling-correction-encoder-transformer.

[1]:
import logging

logging.basicConfig(level=logging.INFO)
[2]:
import malaya
INFO:numexpr.utils:NumExpr defaulting to 8 threads.
[3]:
# some text examples copied from Twitter

string1 = 'krajaan patut bagi pencen awal skt kpd warga emas supaya emosi'
string2 = 'Husein ska mkn aym dkat kampng Jawa'
string3 = 'Melayu malas ni narration dia sama je macam men are trash. True to some, false to some.'
string4 = 'Tapi tak pikir ke bahaya perpetuate myths camtu. Nanti kalau ada hiring discrimination despite your good qualifications because of your race tau pulak marah. Your kids will be victims of that too.'
string5 = 'DrM cerita Melayu malas semenjak saya kat University (early 1980s) and now as i am edging towards retirement in 4-5 years time after a career of being an Engineer, Project Manager, General Manager'
string6 = 'blh bntg dlm kls nlp sy, nnti intch'
string7 = 'mulakn slh org boleh ,bila geng tuh kena slhkn jgk xboleh trima .. pelik'

Load Encoder Transformer speller#

This spelling correction is a transformer-based, improved version of malaya.spelling_correction.probability.Probability. The problem with malaya.spelling_correction.probability.Probability is that it naively picks the highest-probability word based on public sentences (wiki, news and social media) without understanding the actual context, for example,

string = 'krajaan patut bagi pencen awal skt kpd warga emas supaya emosi'
prob_corrector = malaya.spelling_correction.probability.load()
prob_corrector.correct_text(string)
-> 'kerajaan patut bagi pencen awal sakit kepada warga emas supaya emosi'

It should instead have replaced skt with sikit, a common word people use on social media to slightly qualify pencen. So, to fix that, we can use a Transformer model!
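Conceptually, the Transformer speller substitutes each spelling candidate into the sentence and lets the encoder language model score the candidates in context, keeping the best one. A minimal sketch of that idea (not Malaya's actual implementation; ``correct_with_lm`` and the stand-in scorer below are hypothetical):

def correct_with_lm(word, tokens, candidates, lm_score):
    # substitute each candidate for `word` and keep the one the
    # language model scores highest in its surrounding context
    i = tokens.index(word)
    return max(candidates, key = lambda c: lm_score(tokens[:i] + [c] + tokens[i + 1:]))

# toy usage with a stand-in scorer in place of a real masked LM
print(correct_with_lm(
    'skt',
    'bagi pencen awal skt kpd warga emas'.split(),
    ['sakit', 'sikit'],
    lm_score = lambda toks: float('sikit' in toks),
))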

Right now the transformer speller supports ``BERT``, ``ALBERT`` and ``ELECTRA`` only.

def encoder(model, sentence_piece: bool = False, **kwargs):
    """
    Load a Transformer Encoder Spell Corrector. Right now only BERT, ALBERT and ELECTRA are supported.

    Parameters
    ----------
    sentence_piece: bool, optional (default=False)
        if True, reduce possible augmentation states using sentence piece.

    Returns
    -------
    result: malaya.spelling_correction.transformer.Transformer class
    """
[4]:
model = malaya.transformer.load(model = 'electra')
INFO:malaya_boilerplate.huggingface:downloading frozen huseinzol05/v34-pretrained-model/electra-base.tar.gz
WARNING:tensorflow:From /Users/huseinzolkepli/Documents/malaya/malaya/transformers/electra/modeling.py:242: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.Dense instead.
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/layers/core.py:187: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `layer.__call__` method instead.
WARNING:tensorflow:From /Users/huseinzolkepli/Documents/malaya/malaya/transformers/sampling.py:26: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
WARNING:tensorflow:From /Users/huseinzolkepli/Documents/malaya/malaya/transformers/electra/__init__.py:120: multinomial (from tensorflow.python.ops.random_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.random.categorical` instead.
INFO:tensorflow:Restoring parameters from /Users/huseinzolkepli/Malaya/electra-model/base/electra-base/model.ckpt
[5]:
transformer_corrector = malaya.spelling_correction.transformer.encoder(model, sentence_piece = True)
INFO:malaya_boilerplate.huggingface:downloading frozen huseinzol05/v27-preprocessing/sp10m.cased.v4.vocab
INFO:malaya_boilerplate.huggingface:downloading frozen huseinzol05/v27-preprocessing/sp10m.cased.v4.model
INFO:malaya_boilerplate.huggingface:downloading frozen huseinzol05/v27-preprocessing/bm_1grams.json

To correct a word#

def correct(
    self,
    word: str,
    string: List[str],
    index: int = -1,
    lookback: int = 5,
    lookforward: int = 5,
    batch_size: int = 20,
):
    """
    Correct a word within a text, returning the corrected word.

    Parameters
    ----------
    word: str
    string: List[str]
        Tokenized string; `word` must be a word inside `string`.
    index: int, optional (default=-1)
        Index of `word` in the string; if -1, will try to use `string.index(word)`.
    lookback: int, optional (default=5)
        N words on the left-hand side.
        If -1, will take all words on the left-hand side.
        A longer left-hand side takes longer to compute.
    lookforward: int, optional (default=5)
        N words on the right-hand side.
        If -1, will take all words on the right-hand side.
        A longer right-hand side takes longer to compute.
    batch_size: int, optional (default=20)
        batch size to insert into model.

    Returns
    -------
    result: str
    """
[6]:
splitted = string1.split()
transformer_corrector.correct('kpd', splitted)
[6]:
'kepada'
[7]:
transformer_corrector.correct('krajaan', splitted)
[7]:
'kerajaan'
[8]:
%%time

transformer_corrector.correct('skt', splitted)
CPU times: user 19.2 s, sys: 837 ms, total: 20 s
Wall time: 3.59 s
[8]:
'sikit'
[9]:
%%time

transformer_corrector.correct('skt', splitted, lookback = -1)
CPU times: user 18.8 s, sys: 917 ms, total: 19.7 s
Wall time: 3.78 s
[9]:
'sikit'
[10]:
%%time

transformer_corrector.correct('skt', splitted, lookback = 2)
CPU times: user 12.6 s, sys: 588 ms, total: 13.2 s
Wall time: 2.37 s
[10]:
'sikit'
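As the timings show, a smaller ``lookback`` shrinks the context fed to the model and reduces compute, at the risk of losing useful context. ``lookforward`` limits the right-hand side the same way, a sketch:

transformer_corrector.correct('skt', splitted, lookback = 2, lookforward = 2)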

To correct a sentence#

def correct_text(
    self,
    text: str,
    lookback: int = 5,
    lookforward: int = 5,
    batch_size: int = 20
):
    """
    Correct all the words within a text, returning the corrected text.

    Parameters
    ----------
    text: str
    lookback: int, optional (default=5)
        N words on the left-hand side.
        If -1, will take all words on the left-hand side.
        A longer left-hand side takes longer to compute.
    lookforward: int, optional (default=5)
        N words on the right-hand side.
        If -1, will take all words on the right-hand side.
        A longer right-hand side takes longer to compute.
    batch_size: int, optional (default=20)
        batch size to insert into model.

    Returns
    -------
    result: str
    """
[11]:
transformer_corrector.correct_text(string1)
[11]:
'kerajaan patut bagi pencen awal sikit kepada warga emas supaya emosi'
[12]:
tokenizer = malaya.tokenizer.Tokenizer()
[18]:
string2
[18]:
'Husein ska mkn aym dkat kampng Jawa'
[16]:
tokenized = tokenizer.tokenize(string2)
transformer_corrector.correct_text(' '.join(tokenized))
[16]:
'Husein suka mkn ayam dikota kampung Jawa'
[17]:
tokenized = tokenizer.tokenize(string3)
transformer_corrector.correct_text(' '.join(tokenized))
[17]:
'Melayu malas ini narration dia sama sahaja macam men are trash . True to some , false to some .'
[13]:
tokenized = tokenizer.tokenize(string5)
transformer_corrector.correct_text(' '.join(tokenized))
[13]:
'DrM cerita Melayu malas semenjak saya kat University ( early 1980s ) and now as i am edging towards retirement in 4 - 5 years time after a career of being an Engineer , Project Manager , General Manager'
[14]:
tokenized = tokenizer.tokenize(string6)
transformer_corrector.correct_text(' '.join(tokenized))
[14]:
'boleh buntong dalam kelas nlp saye , nanti intch'
[15]:
tokenized = tokenizer.tokenize(string7)
transformer_corrector.correct_text(' '.join(tokenized))
[15]:
'mulakan salah orang boleh , bila geng itu kena salahkan juga xboleh terima . . pelik'
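Putting the two steps together, a small helper (a sketch, assuming the ``tokenizer`` and ``transformer_corrector`` objects defined above) runs the tokenize-then-correct pipeline on any string:

def correct_sentence(s):
    # tokenize first so punctuation is split out, then correct
    return transformer_corrector.correct_text(' '.join(tokenizer.tokenize(s)))

correct_sentence(string4)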