True Case#

This tutorial is available as an IPython notebook at Malaya/example/true-case.

This module was trained on both standard and local (including social media) language structures, so it is safe to use for both.

[1]:
import os

os.environ['CUDA_VISIBLE_DEVICES'] = ''
[2]:
%%time

import malaya
/home/husein/.local/lib/python3.8/site-packages/bitsandbytes/cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "
/home/husein/.local/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cadam32bit_grad_fp32
CPU times: user 2.73 s, sys: 2.3 s, total: 5.03 s
Wall time: 2.54 s
/home/husein/dev/malaya/malaya/tokenizer.py:214: FutureWarning: Possible nested set at position 3397
  self.tok = re.compile(r'({})'.format('|'.join(pipeline)))
/home/husein/dev/malaya/malaya/tokenizer.py:214: FutureWarning: Possible nested set at position 3927
  self.tok = re.compile(r'({})'.format('|'.join(pipeline)))

Explanation#

Common third-party NLP services like Google Speech-to-Text or PDF-to-Text return text with incorrect casing and missing or mistaken punctuation. True Case can help you fix this.

  1. jom makan di us makanan di sana sedap -> jom makan di US, makanan di sana sedap.

  2. kuala lumpur menteri di jabatan perdana menteri datuk seri dr mujahid yusof rawa hari ini mengakhiri lawatan kerja lapan hari ke jordan turki dan bosnia herzegovina lawatan yang bertujuan mengeratkan lagi hubungan dua hala dengan ketiga tiga negara berkenaan -> KUALA LUMPUR - Menteri di Jabatan Perdana Menteri, Datuk Seri Dr Mujahid Yusof Rawa hari ini mengakhiri lawatan kerja lapan hari ke Jordan, Turki dan Bosnia Herzegovina, lawatan yang bertujuan mengeratkan lagi hubungan dua hala dengan ketiga-tiga negara berkenaan.

True Case only:

  1. Solves missing or mistaken punctuation.

  2. Solves wrong or inconsistent casing.

  3. Does not correct any grammar.
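For comparison, a purely rule-based baseline (a hypothetical sketch, not part of Malaya) can only capitalize sentence starts and append terminal punctuation; it cannot recover proper nouns like `US` or insert internal commas, which is what the neural model adds:

```python
def naive_true_case(text):
    # Hypothetical rule-based baseline: capitalize the first character
    # and append a period if no terminal punctuation exists.
    text = text.strip()
    if not text:
        return text
    text = text[0].upper() + text[1:]
    if text[-1] not in '.!?':
        text += '.'
    return text

print(naive_true_case('jom makan di us makanan di sana sedap'))
# -> 'Jom makan di us makanan di sana sedap.' ('us' is left uncorrected)
```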

List available HuggingFace model#

[3]:
malaya.true_case.available_huggingface
[3]:
{'mesolitica/finetune-true-case-t5-super-tiny-standard-bahasa-cased': {'Size (MB)': 51,
  'WER': 0.105094863,
  'CER': 0.02163576,
  'Suggested length': 256},
 'mesolitica/finetune-true-case-t5-tiny-standard-bahasa-cased': {'Size (MB)': 139,
  'WER': 0.0967551738,
  'CER': 0.0201099683,
  'Suggested length': 256},
 'mesolitica/finetune-true-case-t5-small-standard-bahasa-cased': {'Size (MB)': 242,
  'WER': 0.081104625471,
  'CER': 0.016383823,
  'Suggested length': 256}}
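The WER (word error rate) and CER (character error rate) reported above are edit-distance-based metrics; a minimal sketch of how such numbers are computed (not Malaya's actual evaluation code) looks like this:

```python
def edit_distance(ref, hyp):
    # Levenshtein distance via dynamic programming (single-row variant).
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                              # deletion
                        dp[j - 1] + 1,                          # insertion
                        prev + (ref[i - 1] != hyp[j - 1]))      # substitution
            prev = cur
    return dp[n]

def wer(ref, hyp):
    # Word error rate: edit distance over word tokens.
    ref_words = ref.split()
    return edit_distance(ref_words, hyp.split()) / len(ref_words)

def cer(ref, hyp):
    # Character error rate: edit distance over characters.
    return edit_distance(ref, hyp) / len(ref)

print(wer('jom makan di US', 'jom makan di us'))  # -> 0.25
```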
[4]:
print(malaya.true_case.info)
tested on generated dataset at https://f000.backblazeb2.com/file/malay-dataset/true-case/test-set-true-case.json

Load HuggingFace model#

def huggingface(
    model: str = 'mesolitica/finetune-true-case-t5-tiny-standard-bahasa-cased',
    force_check: bool = True,
    **kwargs,
):
    """
    Load HuggingFace model to true case.

    Parameters
    ----------
    model: str, optional (default='mesolitica/finetune-true-case-t5-tiny-standard-bahasa-cased')
        Check available models at `malaya.true_case.available_huggingface`.
    force_check: bool, optional (default=True)
        Force check model one of malaya model.
        Set to False if you have your own huggingface model.

    Returns
    -------
    result: malaya.torch_model.huggingface.Generator
    """
[5]:
model = malaya.true_case.huggingface()
Loading the tokenizer from the `special_tokens_map.json` and the `added_tokens.json` will be removed in `transformers 5`,  it is kept for forward compatibility, but it is recommended to update your `tokenizer_config.json` by uploading it again. You will see the new `added_tokens_decoder` attribute that will store the relevant information.
You are using the default legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. If you see this, DO NOT PANIC! This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thouroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
[6]:
string1 = 'jom makan di us makanan di sana sedap'
string2 = 'kuala lumpur menteri di jabatan perdana menteri datuk seri dr mujahid yusof rawa hari ini mengakhiri lawatan kerja lapan hari ke jordan turki dan bosnia herzegovina lawatan yang bertujuan mengeratkan lagi hubungan dua hala dengan ketiga tiga negara berkenaan'

Predict#

def generate(self, strings: List[str], **kwargs):
    """
    Generate texts from the input.

    Parameters
    ----------
    strings : List[str]
    **kwargs: vector arguments pass to huggingface `generate` method.
        Read more at https://huggingface.co/docs/transformers/main_classes/text_generation

    Returns
    -------
    result: List[str]
    """
[7]:
model.generate([string1, string2], max_length = 256)
spaces_between_special_tokens is deprecated and will be removed in transformers v5. It was adding spaces between `added_tokens`, not special tokens, and does not exist in our fast implementation. Future tokenizers will handle the decoding process on a per-model rule.
[7]:
['Jom makan di US makanan di sana sedap',
 'KUALA LUMPUR: Menteri di Jabatan Perdana Menteri, Datuk Seri Dr Mujahid Yusof Rawa hari ini mengakhiri lawatan kerja lapan hari ke Jordan Turki dan Bosnia Herzegovina, lawatan yang bertujuan mengeratkan lagi hubungan dua hala dengan ketiga-tiga negara berkenaan.']
[8]:
import random

def random_uppercase(string):
    string = [c.upper() if random.randint(0,1) else c for c in string]
    return ''.join(string)
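Since `random_uppercase` is stochastic, each run produces a different perturbation. Seeding `random` first makes the perturbation reproducible (a general `random` module usage note, not a Malaya requirement); note the perturbation only changes casing, never content:

```python
import random

def random_uppercase_seeded(string, seed=None):
    # Variant of random_uppercase with an optional seed for reproducibility.
    if seed is not None:
        random.seed(seed)
    return ''.join(c.upper() if random.randint(0, 1) else c for c in string)

perturbed = random_uppercase_seeded('jom makan di us', seed=13)
# Lowercasing the perturbed string recovers the original input exactly.
assert perturbed.lower() == 'jom makan di us'
```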
[9]:
r = random_uppercase(string2)
r
[9]:
'KuAlA lUmPUr MeNtERI Di JabAtan PerdANA menterI DatuK Seri dR mUjaHId yUsOF rAwA HArI Ini MeNgAkHIrI LawaTAN KeRJa lAPAN HARi KE JORDAn TUrki DAn BoSNIA herZEGoVINA LaWatan yANG bErtujUAN meNgEratKAn laGI HuBUnGAN DUA HAlA DENgAN kETiGa tigA NEgAra bERKeNAAn'
[10]:
model.generate([r], max_length = 256)
[10]:
['Kuala Lumpur Menteri di Jabatan Perdana Menteri Datuk Seri Dr Mujahid Yusof Rawa hari ini mengakhiri lawatan kerja lapan hari ke Jordan Turki dan Bosnia, Herzegovina. Lawatan yang bertujuan mengeratkan lagi hubungan dua hala dengan ketiga tiga negara berkenaan.']

Able to infer mixed MS and EN#

[11]:
string3 = 'i hate chicken but i like fish'
string4 = 'Tun Dr Mahathir Mohamad and Perikatan Nasional (PN) Information chief Datuk Seri Azmin Ali may have differences, but both men are on the same page one thing – the belief that Pakatan Harapan (PH) is bad news for the economy.'
string4 = random_uppercase(string4)
string4
[11]:
'TUN DR MahAtHir MOhAmad and PERIKAtaN NASiOnAl (PN) INfoRMAtion cHIef DaTuk SERi AzmiN ALi MAy haVe difFErENCes, but BoTH mEn are on THe Same paGE ONE thIng – THE beLIeF thaT PaKataN HarAPaN (PH) IS baD nEWs fOr ThE EConOMY.'
[12]:
model.generate([string3, string4], max_length = 256)
[12]:
['I hate chicken but I like fish.',
 'Tun Dr Mahathir Mohamad and Perikatan Nasional (PN) information chief Datuk Seri Azmin Ali may have differences, but both men are on the same page one thing – the belief that Pakatan Harapan (PH) is bad news for the economy.']