GPT2 LM#
This tutorial is available as an IPython notebook at Malaya/example/gpt2-lm.
[1]:
import os

# Hide all CUDA devices so the model loads and runs on CPU only.
os.environ['CUDA_VISIBLE_DEVICES'] = ''
[2]:
import malaya
/home/husein/.local/lib/python3.8/site-packages/bitsandbytes/cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
/home/husein/.local/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cadam32bit_grad_fp32
/home/husein/dev/malaya/malaya/tokenizer.py:214: FutureWarning: Possible nested set at position 3397
self.tok = re.compile(r'({})'.format('|'.join(pipeline)))
/home/husein/dev/malaya/malaya/tokenizer.py:214: FutureWarning: Possible nested set at position 3927
self.tok = re.compile(r'({})'.format('|'.join(pipeline)))
List available GPT2 models#
[4]:
malaya.language_model.available_gpt2
[4]:
{'mesolitica/gpt2-117m-bahasa-cased': {'Size (MB)': 454}}
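Each key is the Hugging Face model name to pass to the loader, and the value reports the approximate download size in MB.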
Load GPT2 LM model#
def gpt2(
model: str = 'mesolitica/gpt2-117m-bahasa-cased',
force_check: bool = True,
**kwargs,
):
"""
Load GPT2 language model.
Parameters
----------
model: str, optional (default='mesolitica/gpt2-117m-bahasa-cased')
Check available models at `malaya.language_model.available_gpt2`.
force_check: bool, optional (default=True)
Force-check that the model is one of the Malaya models.
Set to False if you have your own Hugging Face model.
Returns
-------
result: malaya.torch_model.gpt2_lm.LM class
"""
[5]:
model = malaya.language_model.gpt2()
Loading the tokenizer from the `special_tokens_map.json` and the `added_tokens.json` will be removed in `transformers 5`, it is kept for forward compatibility, but it is recommended to update your `tokenizer_config.json` by uploading it again. You will see the new `added_tokens_decoder` attribute that will store the relevant information.
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 32001. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
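If you have your own GPT2 checkpoint on Hugging Face, the docstring above says you can load it by disabling the check against Malaya's model list. A minimal sketch, where 'your-username/gpt2-bahasa-finetuned' is a hypothetical checkpoint name used for illustration only:

# 'your-username/gpt2-bahasa-finetuned' is a hypothetical checkpoint
# name, not a real published model.
custom_model = malaya.language_model.gpt2(
    model='your-username/gpt2-bahasa-finetuned',
    force_check=False,  # skip validation against Malaya's registered models
)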
[6]:
model.score('saya suke awak')
[6]:
-51.3840389251709
[7]:
model.score('saya suka awak')
[7]:
-46.20505905151367
[8]:
model.score('najib razak')
[8]:
-48.355825901031494
[9]:
model.score('najib comel')
[9]:
-52.79338455200195
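The score behaves as a sentence log-likelihood, so a higher (less negative) value means the model finds the sentence more plausible: the correctly spelled 'saya suka awak' (-46.21) outscores the misspelled 'saya suke awak' (-51.38). A minimal sketch of using the score to rank candidate sentences:

# Pick the candidate the language model considers most probable.
candidates = ['saya suke awak', 'saya suka awak']
best = max(candidates, key=model.score)
print(best)  # 'saya suka awak'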