This tutorial is available as an IPython notebook at Malaya/example/gpt2-lm.

import os

os.environ['CUDA_VISIBLE_DEVICES'] = ''
import malaya


Make sure you have already installed the required dependency,

pip3 install transformers

List available GPT2 models#

Model                                 Size (MB)
mesolitica/gpt2-117m-bahasa-cased    454

Load GPT2 LM model#

def gpt2(model: str = 'mesolitica/gpt2-117m-bahasa-cased', force_check: bool = True, **kwargs):
    """
    Load GPT2 language model.

    Parameters
    ----------
    model: str, optional (default='mesolitica/gpt2-117m-bahasa-cased')
        Check available models at `malaya.language_model.available_gpt2()`.
    force_check: bool, optional (default=True)
        Force check that the model is one of the malaya models.
        Set to False if you have your own huggingface model.

    Returns
    -------
    result: malaya.torch_model.gpt2_lm.LM class
    """

If you have other models from huggingface and want to load them with malaya.torch_model.gpt2_lm.LM, set force_check=False.

model = malaya.language_model.gpt2()
model.score('saya suke awak')
model.score('saya suka awak')
model.score('najib razak')
model.score('najib comel')
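The score calls above compare how likely each sentence is under the language model, so the correctly spelled 'saya suka awak' should outscore the typo 'saya suke awak'. Conceptually, a causal LM scores a sentence by summing the log-probability of each token given its prefix. A minimal sketch of that computation, using hypothetical per-token probabilities for illustration rather than actual model outputs:

```python
import math

def sentence_log_likelihood(token_probs):
    # Sum of log-probabilities of each token given its prefix;
    # higher (less negative) means the model finds the sentence more natural.
    return sum(math.log(p) for p in token_probs)

# Hypothetical per-token probabilities, NOT from the actual GPT2 model.
probs_correct = [0.20, 0.10, 0.30]   # e.g. 'saya suka awak'
probs_typo = [0.20, 0.001, 0.05]     # e.g. 'saya suke awak'

assert sentence_log_likelihood(probs_correct) > sentence_log_likelihood(probs_typo)
```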