Transformer#

This tutorial is available as an IPython notebook at Malaya/example/topic-modeling-transformer.

[1]:
import pandas as pd
import malaya
/home/husein/dev/malaya/malaya/tokenizer.py:214: FutureWarning: Possible nested set at position 3397
  self.tok = re.compile(r'({})'.format('|'.join(pipeline)))
/home/husein/dev/malaya/malaya/tokenizer.py:214: FutureWarning: Possible nested set at position 3927
  self.tok = re.compile(r'({})'.format('|'.join(pipeline)))
[2]:
df = pd.read_csv('tests/02032018.csv',sep=';')
df = df.iloc[3:,1:]
df.columns = ['text','label']
corpus = df.text.tolist()

You can get this file at https://github.com/huseinzol05/malaya/blob/master/tests/02032018.csv. This CSV is already stemmed.

Load vectorizer object#

You can use TfidfVectorizer, CountVectorizer, or any other vectorizer, as long as it has a fit_transform method.

[3]:
from malaya.text.vectorizer import SkipGramCountVectorizer

stopwords = malaya.text.function.get_stopwords()
vectorizer = SkipGramCountVectorizer(
    max_df = 0.95,
    min_df = 1,
    ngram_range = (1, 3),
    stop_words = stopwords,
    skip = 2
)
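As a minimal sketch, assuming scikit-learn is available, a plain TfidfVectorizer would also work here since it implements fit_transform:

from sklearn.feature_extraction.text import TfidfVectorizer

# assumption: any object exposing a fit_transform method is accepted,
# so a standard scikit-learn vectorizer can replace SkipGramCountVectorizer
tfidf_vectorizer = TfidfVectorizer(
    max_df = 0.95,
    min_df = 1,
    ngram_range = (1, 3),
    stop_words = stopwords,
)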

Load Transformer#

We can use a Transformer model to do topic modeling on our corpus, the power of attention!

def attention(
    corpus: List[str],
    n_topics: int,
    vectorizer,
    cleaning=simple_textcleaning,
    stopwords=get_stopwords,
    ngram: Tuple[int, int] = (1, 3),
    batch_size: int = 10,
):
    """
    Use attention from malaya.transformer model to do topic modelling based on corpus given.

    Parameters
    ----------
    corpus: list
    n_topics: int, (default=10)
        size of decomposition column.
    vectorizer: object
    cleaning: function, (default=malaya.text.function.simple_textcleaning)
        function to clean the corpus.
    stopwords: List[str], (default=malaya.text.function.get_stopwords)
        A callable that returned a List[str], or a List[str], or a Tuple[str]
    ngram: tuple, (default=(1,3))
        n-grams size to train a corpus.
    batch_size: int, (default=10)
        size of strings for each vectorization and attention.

    Returns
    -------
    result: malaya.topic_model.transformer.AttentionTopic class
    """
[5]:
malaya.transformer.available_huggingface
[5]:
{'mesolitica/roberta-base-bahasa-cased': {'Size (MB)': 443},
 'mesolitica/roberta-tiny-bahasa-cased': {'Size (MB)': 66.1},
 'mesolitica/bert-base-standard-bahasa-cased': {'Size (MB)': 443},
 'mesolitica/bert-tiny-standard-bahasa-cased': {'Size (MB)': 66.1},
 'mesolitica/roberta-base-standard-bahasa-cased': {'Size (MB)': 443},
 'mesolitica/roberta-tiny-standard-bahasa-cased': {'Size (MB)': 66.1},
 'mesolitica/electra-base-generator-bahasa-cased': {'Size (MB)': 140},
 'mesolitica/electra-small-generator-bahasa-cased': {'Size (MB)': 19.3}}
[6]:
electra = malaya.transformer.huggingface(model = 'mesolitica/electra-base-generator-bahasa-cased')
Loading the tokenizer from the `special_tokens_map.json` and the `added_tokens.json` will be removed in `transformers 5`,  it is kept for forward compatibility, but it is recommended to update your `tokenizer_config.json` by uploading it again. You will see the new `added_tokens_decoder` attribute that will store the relevant information.
[7]:
attention = malaya.topic_model.transformer.attention(corpus, n_topics = 10, vectorizer = electra)
You're using a ElectraTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
        - Avoid using `tokenizers` before the fork if possible
        - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
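The fork warning above can be silenced by setting the environment variable it mentions; a small sketch (set it before tokenizers is used again):

import os

# disable tokenizers parallelism to avoid the fork warning above
os.environ['TOKENIZERS_PARALLELISM'] = 'false'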

Get topics#

def top_topics(
    self, len_topic: int, top_n: int = 10, return_df: bool = True
):
    """
    Print important topics based on decomposition.

    Parameters
    ----------
    len_topic: int
        size of topics.
    top_n: int, optional (default=10)
        top n of each topic.
    return_df: bool, optional (default=True)
        return as pandas.DataFrame, else JSON.
    """
[8]:
attention.top_topics(5, top_n = 10, return_df = True)
[8]:
   topic 0      topic 1             topic 2                topic 3        topic 4
0  pertumbuhan  kenyataan           malaysia               malaysia       kerajaan
1  hutang       kwsp                negara                 berita         menteri
2  pendapatan   kerajaan            kerajaan               rakyat         penjelasan
3  harga        dana                pengalaman             memalukan      laporan
4  malaysia     menulis             berkongsi              kapal          malaysia
5  projek       dakwaan             perancangan            wang           perdana
6  kaya         mahkamah            berkongsi pengalaman   berita palsu   pemilihan
7  peningkatan  tindakan            kementerian            palsu          ros
8  kenaikan     pertimbangan        impian                 negara         tn
9  penerima     menulis kenyataan   pendekatan             buku           isu
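
Based on the return_df parameter documented above, the same call can return JSON instead of a DataFrame; a minimal sketch:

# return the topics as JSON instead of a pandas.DataFrame
attention.top_topics(5, top_n = 10, return_df = False)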

Get topics as string#

def get_topics(self, len_topic: int):
    """
    Return important topics based on decomposition.

    Parameters
    ----------
    len_topic: int
        size of topics.

    Returns
    -------
    result: List[str]
    """
[9]:
attention.get_topics(10)
[9]:
[(0,
  'pertumbuhan hutang pendapatan harga malaysia projek kaya peningkatan kenaikan penerima'),
 (1,
  'kenyataan kwsp kerajaan dana menulis dakwaan mahkamah tindakan pertimbangan menulis kenyataan'),
 (2,
  'malaysia negara kerajaan pengalaman berkongsi perancangan berkongsi pengalaman kementerian impian pendekatan'),
 (3,
  'malaysia berita rakyat memalukan kapal wang berita palsu palsu negara buku'),
 (4,
  'kerajaan menteri penjelasan laporan malaysia perdana pemilihan ros tn isu'),
 (5,
  'memudahkan mengundi rakyat sasaran mewujudkan berkembang memudahkan rakyat nilai impak mengundi mewujudkan impak mengundi'),
 (6,
  'teknikal berkembang mdb bincang kerja duit selesaikan lancar berlaku kerajaan duit'),
 (7,
  'bayar keputusan bahasa stres kebenaran selesaikan pekan dipecat selesaikan terdekat ambil'),
 (8,
  'parti umno pas bersatu perlembagaan ros undi keputusan pendaftaran harapan'),
 (9,
  'projek rendah gembira mempercayai kebajikan berjalan menjaga kebajikan rakyat malaysia gembira projek')]
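
Since get_topics returns a list of (index, string) tuples, you can iterate over it directly; a small sketch:

# print the first 5 keywords of every topic
for no, topic in attention.get_topics(10):
    print(no, topic.split()[:5])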