Spelling Correction using probability LM#

This tutorial is available as an IPython notebook at Malaya/example/spelling-correction-probability-lm.

This spelling correction extends Peter Norvig's spell corrector, http://norvig.com/spell-correct.html, with a KenLM language model.

It is further improved using algorithms from Normalization of noisy texts in Malaysian online reviews, https://www.researchgate.net/publication/287050449_Normalization_of_noisy_texts_in_Malaysian_online_reviews.

It also adds custom vowel augmentation.
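
At a high level, candidates for a noisy word are generated Norvig-style (edits on the word plus vowel augmentation for consonant-skeleton spellings such as skt), filtered against a known vocabulary, and the language model picks the candidate that fits the context best. Below is a minimal sketch of that idea only, not Malaya's actual implementation; vocab (a set of known Malay words) and lm (a KenLM model) are assumed inputs, and the real augmentation rules are richer than this.

# Conceptual sketch only (not Malaya's implementation): vowel augmentation
# plus language-model scoring. `vocab` and `lm` are assumed to exist.
import itertools

VOWELS = list('aeiou')

def vowel_augment(word):
    # Re-insert vowels around a consonant skeleton, e.g. 'skt' -> 'sakit', 'sikit', ...
    slots = len(word) + 1
    candidates = set()
    for vowels in itertools.product([''] + VOWELS, repeat=slots):
        chars = []
        for ch, v in zip(word, vowels):
            chars.append(v)
            chars.append(ch)
        chars.append(vowels[-1])
        candidates.add(''.join(chars))
    return candidates

def correct_with_lm(word, left_context, vocab, lm):
    # Keep only augmentations that are real words, then let the language model
    # pick the one that fits the left context best.
    candidates = {w for w in vowel_augment(word) if w in vocab} or {word}
    return max(candidates, key=lambda w: lm.score(' '.join(left_context + [w]), bos=False, eos=False))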

[1]:
import os

os.environ['CUDA_VISIBLE_DEVICES'] = ''
os.environ['TF_FORCE_GPU_ALLOW_GROWTH'] = 'true'
[2]:
import logging

logging.basicConfig(level=logging.INFO)
[3]:
import malaya
/home/husein/.local/lib/python3.8/site-packages/bitsandbytes/cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "
/home/husein/.local/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cadam32bit_grad_fp32
INFO:torch.distributed.nn.jit.instantiator:Created a temporary directory at /tmp/tmphlkljyxm
INFO:torch.distributed.nn.jit.instantiator:Writing /tmp/tmphlkljyxm/_remote_module_non_scriptable.py
/home/husein/dev/malaya/malaya/tokenizer.py:214: FutureWarning: Possible nested set at position 3397
  self.tok = re.compile(r'({})'.format('|'.join(pipeline)))
/home/husein/dev/malaya/malaya/tokenizer.py:214: FutureWarning: Possible nested set at position 3927
  self.tok = re.compile(r'({})'.format('|'.join(pipeline)))
[4]:
# some text examples copied from Twitter

string1 = 'krajaan patut bagi pencen awal skt kpd warga emas supaya emosi'
string2 = 'Husein ska mkn aym dkat kampng Jawa'
string3 = 'Melayu malas ni narration dia sama je macam men are trash. True to some, false to some.'
string4 = 'Tapi tak pikir ke bahaya perpetuate myths camtu. Nanti kalau ada hiring discrimination despite your good qualifications because of your race tau pulak marah. Your kids will be victims of that too.'
string5 = 'DrM cerita Melayu malas semenjak saya kat University (early 1980s) and now as i am edging towards retirement in 4-5 years time after a career of being an Engineer, Project Manager, General Manager'
string6 = 'blh bntg dlm kls nlp sy, nnti intch'
string7 = 'mulakn slh org boleh ,bila geng tuh kena slhkn jgk xboleh trima .. pelik'

Load probability model#

def load(
    language_model=None,
    sentence_piece: bool = False,
    stemmer=None,
    **kwargs,
):
    """
    Load a Probability Spell Corrector.

    Parameters
    ----------
    language_model: Callable, optional (default=None)
        If not None, must be an object with a `score` method.
    sentence_piece: bool, optional (default=False)
        If True, reduce possible augmentation states using sentence piece.
    stemmer: Callable, optional (default=None)
        a Callable object, must have a `stem_word` method.

    Returns
    -------
    result: model
        List of model classes:

        * if passed `language_model` will return `malaya.spelling_correction.probability.ProbabilityLM`.
        * else will return `malaya.spelling_correction.probability.Probability`.
    """
[5]:
lm = malaya.language_model.kenlm()
lm
[5]:
<Model from b'model.klm'>
[6]:
model = malaya.spelling_correction.probability.load(language_model = lm)
INFO:malaya_boilerplate.huggingface:downloading frozen huseinzol05/v27-preprocessing/bm_1grams.json

List the possible pool of generated words#

def edit_candidates(self, word):
    """
    Generate candidates given a word.

    Parameters
    ----------
    word: str

    Returns
    -------
    result: List[str]
    """
[7]:
model.edit_candidates('mhthir')
[7]:
['mahathir']
[8]:
model.edit_candidates('smbng')
[8]:
['sumbing',
 'sombong',
 'sembing',
 'simbang',
 'sambang',
 'sumbang',
 'sembang',
 'sambong',
 'sembung',
 'sembong',
 'sambung']
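
The final choice among these candidates depends on the language model. A rough illustration of that ranking step, scoring each candidate in a short made-up left context with the `lm` object loaded earlier (Malaya's internal scoring may differ in its details):

# Rank the candidates for 'smbng' by how well they fit a hypothetical left context,
# using the KenLM `score` method directly (rough illustration only).
candidates = model.edit_candidates('smbng')
left_context = 'saya suka'  # hypothetical context
sorted(candidates, key=lambda w: lm.score(f'{left_context} {w}', bos=False, eos=False), reverse=True)[:3]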

To correct a word#

def correct(
    self,
    word: str,
    string: List[str],
    index: int = -1,
    lookback: int = 3,
    lookforward: int = 3,
):
    """
    Correct a word within a text, returning the corrected word.

    Parameters
    ----------
    word: str
    string: List[str]
        Tokenized text (list of words); `word` must be a word inside `string`.
    index: int, optional (default=-1)
        index of word in the string, if -1, will try to use `string.index(word)`.
    lookback: int, optional (default=3)
        N left hand side words.
    lookforward: int, optional (default=3)
        N right hand side words.

    Returns
    -------
    result: str
    """
[9]:
splitted = string1.split()
model.correct('kpd', splitted)
[9]:
'kpd'
[10]:
model.correct('krajaan', splitted)
[10]:
'kerajaan'
[11]:
%%time

model.correct('skt', splitted)
CPU times: user 6.05 ms, sys: 0 ns, total: 6.05 ms
Wall time: 5.92 ms
[11]:
'sikit'
[12]:
%%time

model.correct('skt', splitted, lookback = -1)
CPU times: user 4.25 ms, sys: 341 µs, total: 4.59 ms
Wall time: 4.43 ms
[12]:
'sikit'
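
To correct every word yourself, you can loop over the tokens and pass `index` explicitly; the `correct_text` method in the next section does this for an entire text. A usage sketch:

# Correct every token of string1 by hand, passing `index` explicitly so that
# repeated words resolve against the correct position.
corrected = [model.correct(word, splitted, index=i) for i, word in enumerate(splitted)]
' '.join(corrected)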

To correct a sentence#

def correct_text(
    self,
    text: str,
    lookback: int = 3,
    lookforward: int = 3,
):
    """
    Correct all the words within a text, returning the corrected text.

    Parameters
    ----------
    text: str
    lookback: int, optional (default=3)
        N words on the left hand side.
        If -1, will take all words on the left hand side.
        A longer left hand side will take longer to compute.
    lookforward: int, optional (default=3)
        N words on the right hand side.
        If -1, will take all words on the right hand side.
        A longer right hand side will take longer to compute.

    Returns
    -------
    result: str
    """
[13]:
model.correct_text(string1)
[13]:
'kerajaan patut bagi pencen awal sikit kpd warga emas supaya emosi'
[14]:
tokenizer = malaya.tokenizer.Tokenizer()
[15]:
tokenized = tokenizer.tokenize(string2)
model.correct_text(' '.join(tokenized))
[15]:
'Husin ska makan ayam dekat kampung Jawa'
[16]:
tokenized = tokenizer.tokenize(string3)
model.correct_text(' '.join(tokenized))
[16]:
'Melayu malas ni narration dia sama je macam men are trash . True to some , false to some .'
[17]:
tokenized = tokenizer.tokenize(string4)
model.correct_text(' '.join(tokenized))
[17]:
'Tapi tak pikir ke bahaya perpetuate myths camtu . Nanti kalau ada hiring discrimination despite your good qualifications because of your race tau pulak marah . Your kids will be victims of that too .'
[18]:
tokenized = tokenizer.tokenize(string5)
model.correct_text(' '.join(tokenized))
[18]:
'DrM cerita Melayu malas semenjak saya kat University ( early 1980s ) and now has i am edging towards retirement ini 4 - 5 years time after a career of being ini Engineer , Project Manager , General Manager'
[19]:
tokenized = tokenizer.tokenize(string6)
model.correct_text(' '.join(tokenized))
[19]:
'blh bintang dlm kelas nlp saya , nnti intch'
[20]:
tokenized = tokenizer.tokenize(string7)
model.correct_text(' '.join(tokenized))
[20]:
'mulakan slh org boleh , bila geng tuh kena salahkan jgk xboleh trima . . pelik'
[21]:
s = 'mulakn slh org boleh ,bila geng tuh kena slhkn jgk xboleh trima .. pelik , dia slhkn org bole hri2 crta sakau then bila kna bls balik xdpt jwb ,kata mcm biasa slh (parti sampah) 🤣🤣🤣 jgn mulakn dlu slhkn org kalau xboleh trima bila kna bls balik 🤣🤣🤣'
[22]:
tokenized = tokenizer.tokenize(s)
model.correct_text(' '.join(tokenized))
[22]:
'mulakan slh org boleh , bila geng tuh kena salahkan jgk xboleh trima . . pelik , dia salahkan org bole hri2 cerita sakau then bila kena bilas balik xdpt jwb , kata mcm biasa slh ( parti sampah ) 🤣 🤣 🤣 jgn mulakan dlu salahkan org kalau xboleh trima bila kena bilas balik 🤣 🤣 🤣'

Load stemmer for probability model#

By default, kata imbuhan (affixed words) are captured using a naive regex pattern that does not understand word structure, and the problem with that is there are many rules to hardcode. Instead, we can use a better stemmer model such as malaya.stem.huggingface().
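
As the `load` docstring above notes, `stemmer` only needs to expose a `stem_word` method, so a custom stemmer can be plugged in as well. A hypothetical example of that contract (this notebook uses `malaya.stem.huggingface()` instead):

# Hypothetical custom stemmer: any object with a `stem_word` method satisfies
# the contract described in the `load` docstring.
class NaiveStemmer:
    def stem_word(self, word, **kwargs):
        # toy rule: strip a few common suffixes; a real stemmer understands word structure
        for suffix in ('kan', 'nya', 'lah'):
            if word.endswith(suffix) and len(word) > len(suffix) + 2:
                return word[:-len(suffix)]
        return word

# model_custom = malaya.spelling_correction.probability.load(
#     language_model=lm, stemmer=NaiveStemmer()
# )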

[23]:
stemmer = malaya.stem.huggingface()
INFO:malaya_boilerplate.huggingface:downloading frozen mesolitica/stem-lstm-512/model.pt
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
[24]:
model_stemmer = malaya.spelling_correction.probability.load(language_model = lm, stemmer = stemmer)
INFO:malaya_boilerplate.huggingface:downloading frozen huseinzol05/v27-preprocessing/bm_1grams.json
[25]:
tokenized = tokenizer.tokenize(string7)
model_stemmer.correct_text(' '.join(tokenized))
spaces_between_special_tokens is deprecated and will be removed in transformers v5. It was adding spaces between `added_tokens`, not special tokens, and does not exist in our fast implementation. Future tokenizers will handle the decoding process on a per-model rule.
[25]:
'mulakan slh org boleh , bila geng tuh kena salahkan jgk xboleh trima . . pelik'
[26]:
s = 'mulakn slh org boleh ,bila geng tuh kena slhkn jgk xboleh trima .. pelik , dia slhkn org bole hri2 crta sakau then bila kna bls balik xdpt jwb ,kata mcm biasa slh (parti sampah) 🤣🤣🤣 jgn mulakn dlu slhkn org kalau xboleh trima bila kna bls balik 🤣🤣🤣'
[27]:
tokenized = tokenizer.tokenize(s)
model_stemmer.correct_text(' '.join(tokenized))
[27]:
'mulakan slh org boleh , bila geng tuh kena salahkan jgk xboleh trima . . pelik , dia salahkan org bole hri2 cerita sakau then bila kena bilas balik xdpt jwb , kata mcm biasa slh ( parti sampah ) 🤣 🤣 🤣 jgn mulakan dlu salahkan org kalau xboleh trima bila kena bilas balik 🤣 🤣 🤣'