Transformer#

This tutorial is available as an IPython notebook at Malaya/example/topic-modeling-transformer.

[1]:
import pandas as pd
import malaya
[2]:
df = pd.read_csv('tests/02032018.csv',sep=';')
df = df.iloc[3:,1:]
df.columns = ['text','label']
corpus = df.text.tolist()

You can get this file at https://github.com/huseinzol05/malaya/blob/master/tests/02032018.csv. This CSV has already been stemmed.

Load vectorizer object#

You can use TfidfVectorizer, CountVectorizer, or any vectorizer, as long as it has a fit_transform method.

[4]:
from malaya.text.vectorizer import SkipGramCountVectorizer

stopwords = malaya.text.function.get_stopwords()
vectorizer = SkipGramCountVectorizer(
    max_df = 0.95,
    min_df = 1,
    ngram_range = (1, 3),
    stop_words = stopwords,
    skip = 2
)
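
As noted above, any object exposing fit_transform is acceptable. As a minimal sketch (assuming scikit-learn is installed), scikit-learn's TfidfVectorizer could be configured in the same spirit and passed wherever a vectorizer object is expected:

from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical alternative to SkipGramCountVectorizer; any object with a
# fit_transform method should work here.
tfidf = TfidfVectorizer(
    max_df = 0.95,
    min_df = 1,
    ngram_range = (1, 3),
    stop_words = stopwords,
)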

Load Transformer#

We can use a Transformer model to do topic modeling for our corpus, leveraging the power of attention!

def attention(
    corpus: List[str],
    n_topics: int,
    vectorizer,
    cleaning=simple_textcleaning,
    stopwords=get_stopwords,
    ngram: Tuple[int, int] = (1, 3),
    batch_size: int = 10,
):
    """
    Use attention from malaya.transformer model to do topic modelling based on corpus given.

    Parameters
    ----------
    corpus: list
    n_topics: int, (default=10)
        size of decomposition column.
    vectorizer: object
    cleaning: function, (default=malaya.text.function.simple_textcleaning)
        function to clean the corpus.
    stopwords: List[str], (default=malaya.text.function.get_stopwords)
        A callable that returns a List[str], a List[str], or a Tuple[str]
    ngram: tuple, (default=(1,3))
        n-grams size to train a corpus.
    batch_size: int, (default=10)
        size of strings for each vectorization and attention.

    Returns
    -------
    result: malaya.topic_model.transformer.AttentionTopic class
    """
[5]:
malaya.transformer.available_huggingface()
[5]:
                                                              Size (MB)
mesolitica/roberta-base-bahasa-cased                              443.0
mesolitica/roberta-tiny-bahasa-cased                               66.1
mesolitica/bert-base-standard-bahasa-cased                        443.0
mesolitica/bert-tiny-standard-bahasa-cased                         66.1
mesolitica/roberta-base-standard-bahasa-cased                     443.0
mesolitica/roberta-tiny-standard-bahasa-cased                      66.1
mesolitica/electra-base-generator-bahasa-cased                    140.0
mesolitica/electra-small-generator-bahasa-cased                    19.3
mesolitica/finetune-mnli-t5-super-tiny-standard-bahasa-cased       50.7
mesolitica/finetune-mnli-t5-tiny-standard-bahasa-cased            139.0
mesolitica/finetune-mnli-t5-small-standard-bahasa-cased           242.0
mesolitica/finetune-mnli-t5-base-standard-bahasa-cased            892.0
[6]:
electra = malaya.transformer.huggingface(model = 'mesolitica/electra-base-generator-bahasa-cased')
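
If memory or download size is a concern, any other checkpoint from the table above can be loaded the same way; for example, the 19.3 MB small ELECTRA generator:

# Smaller checkpoint from the table above, loaded with the same API.
electra_small = malaya.transformer.huggingface(
    model = 'mesolitica/electra-small-generator-bahasa-cased'
)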
[8]:
attention = malaya.topic_model.transformer.attention(corpus, n_topics = 10, vectorizer = electra)
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
        - Avoid using `tokenizers` before the fork if possible
        - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
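
The warning above is harmless, but it can be silenced by setting the environment variable it mentions before the tokenizer is used:

import os

# Disable tokenizers parallelism before any forking happens, as suggested
# by the warning message itself.
os.environ['TOKENIZERS_PARALLELISM'] = 'false'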

Get topics#

def top_topics(
    self, len_topic: int, top_n: int = 10, return_df: bool = True
):
    """
    Print important topics based on decomposition.

    Parameters
    ----------
    len_topic: int
        size of topics.
    top_n: int, optional (default=10)
        top n of each topic.
    return_df: bool, optional (default=True)
        return as pandas.DataFrame, else JSON.
    """
[9]:
attention.top_topics(5, top_n = 10, return_df = True)
[9]:
   topic 0      topic 1            topic 2               topic 3       topic 4
0  pertumbuhan  kenyataan          malaysia              malaysia      kerajaan
1  hutang       kwsp               negara                berita        menteri
2  pendapatan   kerajaan           kerajaan              rakyat        penjelasan
3  harga        dana               pengalaman            memalukan     laporan
4  malaysia     menulis            berkongsi             kapal         malaysia
5  projek       dakwaan            perancangan           wang          perdana
6  kaya         mahkamah           berkongsi pengalaman  berita palsu  pemilihan
7  peningkatan  tindakan           kementerian           palsu         ros
8  kenaikan     pertimbangan       impian                negara        tn
9  penerima     menulis kenyataan  pendekatan            buku          isu
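
Per the docstring, setting return_df = False returns the same topics as JSON-style Python structures instead of a pandas.DataFrame, which can be handier for serialization; a minimal sketch:

# Same call as above, but return JSON-style output instead of a DataFrame.
topics = attention.top_topics(5, top_n = 10, return_df = False)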

Get topics as string#

def get_topics(self, len_topic: int):
    """
    Return important topics based on decomposition.

    Parameters
    ----------
    len_topic: int
        size of topics.

    Returns
    -------
    result: List[str]
    """
[10]:
attention.get_topics(10)
[10]:
[(0,
  'pertumbuhan hutang pendapatan harga malaysia projek kaya peningkatan kenaikan penerima'),
 (1,
  'kenyataan kwsp kerajaan dana menulis dakwaan mahkamah tindakan pertimbangan menulis kenyataan'),
 (2,
  'malaysia negara kerajaan pengalaman berkongsi perancangan berkongsi pengalaman kementerian impian pendekatan'),
 (3,
  'malaysia berita rakyat memalukan kapal wang berita palsu palsu negara buku'),
 (4,
  'kerajaan menteri penjelasan laporan malaysia perdana pemilihan ros tn isu'),
 (5,
  'memudahkan mengundi rakyat sasaran mewujudkan berkembang memudahkan rakyat nilai impak mengundi mewujudkan impak mengundi'),
 (6,
  'teknikal berkembang mdb bincang kerja duit selesaikan lancar berlaku kerajaan duit'),
 (7,
  'bayar keputusan bahasa stres kebenaran selesaikan pekan dipecat selesaikan terdekat ambil'),
 (8,
  'parti umno pas bersatu perlembagaan ros undi keputusan pendaftaran harapan'),
 (9,
  'projek rendah gembira mempercayai kebajikan berjalan menjaga kebajikan rakyat malaysia gembira projek')]
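
Since get_topics returns a list of (topic index, string) tuples, the output is easy to post-process; a minimal sketch:

# Split each topic string into whitespace-separated tokens and print the
# first few for each topic.
for no, words in attention.get_topics(10):
    print(no, words.split()[:5])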