Unsupervised Keyword Extraction#

We can use any vectorizer model to extract the top-N keywords.

This tutorial is available as an IPython notebook at Malaya/example/unsupervised-keyword-extraction.

[1]:
import malaya
[2]:
# https://www.bharian.com.my/berita/nasional/2020/06/698386/isu-bersatu-tun-m-6-yang-lain-saman-muhyiddin

string = """
Dalam saman itu, plaintif memohon perisytiharan, antaranya mereka adalah ahli BERSATU yang sah, masih lagi memegang jawatan dalam parti (bagi pemegang jawatan) dan layak untuk bertanding pada pemilihan parti.

Mereka memohon perisytiharan bahawa semua surat pemberhentian yang ditandatangani Muhammad Suhaimi bertarikh 28 Mei lalu dan pengesahan melalui mesyuarat Majlis Pimpinan Tertinggi (MPT) parti bertarikh 4 Jun lalu adalah tidak sah dan terbatal.

Plaintif juga memohon perisytiharan bahawa keahlian Muhyiddin, Hamzah dan Muhammad Suhaimi di dalam BERSATU adalah terlucut, berkuat kuasa pada 28 Februari 2020 dan/atau 29 Februari 2020, menurut Fasal 10.2.3 perlembagaan parti.

Yang turut dipohon, perisytiharan bahawa Seksyen 18C Akta Pertubuhan 1966 adalah tidak terpakai untuk menghalang pelupusan pertikaian berkenaan oleh mahkamah.

Perisytiharan lain ialah Fasal 10.2.6 Perlembagaan BERSATU tidak terpakai di atas hal melucutkan/ memberhentikan keahlian semua plaintif.
"""
[3]:
import re

# minimal cleaning: remove newlines, strip non-alphabetic characters
# (keeping hyphens and parentheses) and collapse repeated spaces.
def cleaning(string):
    string = string.replace('\n', ' ')
    string = re.sub(r'[^A-Za-z\-() ]+', ' ', string).strip()
    string = re.sub(r'[ ]+', ' ', string).strip()
    return string

string = cleaning(string)
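Note that the second regex also removes digits, which is why phrases such as `28 Mei` and `Seksyen 18C` appear as `Mei` and `Seksyen C` in the outputs below. A self-contained sketch of the same cleaning step on a small hypothetical input:

```python
import re

def cleaning(string):
    # replace newlines with spaces, drop anything that is not a letter,
    # hyphen, parenthesis or space (this removes digits too),
    # then collapse repeated spaces
    string = string.replace('\n', ' ')
    string = re.sub(r'[^A-Za-z\-() ]+', ' ', string).strip()
    string = re.sub(r'[ ]+', ' ', string).strip()
    return string

cleaning('bertarikh 28 Mei\nlalu (MPT)')
```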

Use RAKE algorithm#

The original implementation is from https://github.com/aneesha/RAKE. Malaya adds an attention mechanism on top of the RAKE algorithm.

def rake(
    string: str,
    model = None,
    vectorizer = None,
    top_k: int = 5,
    atleast: int = 1,
    stopwords = get_stopwords,
    **kwargs
):
    """
    Extract keywords using the RAKE algorithm.

    Parameters
    ----------
    string: str
    model: Object, optional (default=None)
        Transformer model or any model that has an `attention` method.
    vectorizer: Object, optional (default=None)
        Prefer `sklearn.feature_extraction.text.CountVectorizer` or
        `malaya.text.vectorizer.SkipGramCountVectorizer`.
        If None, ngrams will be generated automatically based on `stopwords`.
    top_k: int, optional (default=5)
        return top-k results.
    ngram: tuple, optional (default=(1,1))
        n-gram sizes.
    atleast: int, optional (default=1)
        minimum number of occurrences in the string to accept as a candidate.
    stopwords: List[str], (default=malaya.texts.function.get_stopwords)
        A callable that returns a List[str], a List[str], or a Tuple[str],
        used by the automatic ngram generator.

    Returns
    -------
    result: Tuple[float, str]
    """

auto-ngram#

This will automatically generate variable-size ngrams as keyword candidates.

[4]:
malaya.keyword_extraction.rake(string)
[4]:
[(0.11666666666666665, 'ditandatangani Muhammad Suhaimi bertarikh Mei'),
 (0.08888888888888888, 'mesyuarat Majlis Pimpinan Tertinggi'),
 (0.08888888888888888, 'Seksyen C Akta Pertubuhan'),
 (0.05138888888888888, 'parti bertarikh Jun'),
 (0.04999999999999999, 'keahlian Muhyiddin Hamzah')]

auto-ngram with Attention#

This will use the attention mechanism to score candidates. We use `small-electra` in this example.

[5]:
electra = malaya.transformer.load(model = 'small-electra')
INFO:tensorflow:Restoring parameters from /Users/huseinzolkepli/Malaya/electra-model/small/electra-small/model.ckpt
[6]:
malaya.keyword_extraction.rake(string, model = electra)
[6]:
[(0.21135464299906287, 'ditandatangani Muhammad Suhaimi bertarikh Mei'),
 (0.1707678937548548, 'terlucut berkuat kuasa'),
 (0.1665075410114966, 'Muhammad Suhaimi'),
 (0.16204322474881924, 'mesyuarat Majlis Pimpinan Tertinggi'),
 (0.08333932270307894, 'Seksyen C Akta Pertubuhan')]

using vectorizer#

[7]:
from malaya.text.vectorizer import SkipGramCountVectorizer

stopwords = malaya.text.function.get_stopwords()
vectorizer = SkipGramCountVectorizer(
    token_pattern = r'[\S]+',
    ngram_range = (1, 3),
    stop_words = stopwords,
    lowercase = False,
    skip = 2
)
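`SkipGramCountVectorizer` counts n-grams that may skip intermediate tokens. As a rough illustration, here is one common definition where a candidate n-gram may skip up to `skip` tokens in total (`skip_grams` is a hypothetical helper; the library's exact semantics may differ):

```python
from itertools import combinations

def skip_grams(tokens, n, skip):
    # every in-order n-token selection whose gaps between chosen
    # positions sum to at most `skip` skipped tokens
    grams = set()
    for idx in combinations(range(len(tokens)), n):
        skipped = sum(b - a - 1 for a, b in zip(idx, idx[1:]))
        if skipped <= skip:
            grams.add(' '.join(tokens[i] for i in idx))
    return grams

skip_grams(['plaintif', 'memohon', 'perisytiharan', 'sah'], 2, 1)
```

This is why, with `skip = 2` above, candidates such as `parti memohon perisytiharan` can appear even though the words are not adjacent in the text.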
[8]:
malaya.keyword_extraction.rake(string, vectorizer = vectorizer)
[8]:
[(0.0017052987393271276, 'parti memohon perisytiharan'),
 (0.0017036368782590756, 'memohon perisytiharan BERSATU'),
 (0.0017012023597074357, 'memohon perisytiharan sah'),
 (0.0017012023597074357, 'sah memohon perisytiharan'),
 (0.0016992809994779549, 'perisytiharan BERSATU sah')]

fixed-ngram with Attention#

[9]:
malaya.keyword_extraction.rake(string, model = electra, vectorizer = vectorizer)
[9]:
[(0.011575973734122788, 'Suhaimi terlucut kuasa'),
 (0.011181844743375292, 'Suhaimi terlucut berkuat'),
 (0.011115823052133569, 'Hamzah Suhaimi terlucut'),
 (0.011088263093292463, 'Muhammad Suhaimi terlucut'),
 (0.010932739982610082, 'Suhaimi BERSATU terlucut')]

Use Textrank algorithm#

Malaya uses the standard TextRank algorithm.

def textrank(
    string: str,
    model = None,
    vectorizer = None,
    top_k: int = 5,
    atleast: int = 1,
    stopwords = get_stopwords,
    **kwargs
):
    """
    Extract keywords using the TextRank algorithm.

    Parameters
    ----------
    string: str
    model: Object, optional (default=None)
        Any model that has a `fit_transform` or `vectorize` method.
    vectorizer: Object, optional (default=None)
        Prefer `sklearn.feature_extraction.text.CountVectorizer` or
        `malaya.text.vectorizer.SkipGramCountVectorizer`.
        If None, ngrams will be generated automatically based on `stopwords`.
    top_k: int, optional (default=5)
        return top-k results.
    atleast: int, optional (default=1)
        minimum number of occurrences in the string to accept as a candidate.
    stopwords: List[str], (default=malaya.texts.function.get_stopwords)
        A callable that returns a List[str], a List[str], or a Tuple[str]

    Returns
    -------
    result: Tuple[float, str]
    """
[10]:
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer()

auto-ngram with TFIDF#

This will automatically generate variable-size ngrams as keyword candidates.

[11]:
malaya.keyword_extraction.textrank(string, model = tfidf)
[11]:
[(0.00015733542072521276, 'plaintif memohon perisytiharan'),
 (0.00012558967703709954, 'Fasal perlembagaan parti'),
 (0.00011514137183023093, 'Fasal Perlembagaan BERSATU'),
 (0.00011505528232050447, 'parti'),
 (0.00010763519022276223, 'memohon perisytiharan')]

auto-ngram with Attention#

This will automatically generate variable-size ngrams as keyword candidates.

[12]:
electra = malaya.transformer.load(model = 'small-electra')
albert = malaya.transformer.load(model = 'albert')
INFO:tensorflow:Restoring parameters from /Users/huseinzolkepli/Malaya/electra-model/small/electra-small/model.ckpt
INFO:tensorflow:Restoring parameters from /Users/huseinzolkepli/Malaya/albert-model/base/albert-base/model.ckpt
[13]:
malaya.keyword_extraction.textrank(string, model = electra)
[13]:
[(6.318265869072403e-05, 'dipohon perisytiharan'),
 (6.316746537201306e-05, 'pemegang jawatan'),
 (6.316118885596658e-05, 'parti bertarikh Jun'),
 (6.316104343935219e-05, 'Februari'),
 (6.315818745707347e-05, 'plaintif')]
[14]:
malaya.keyword_extraction.textrank(string, model = albert)
[14]:
[(7.964654245909712e-05, 'Fasal Perlembagaan BERSATU'),
 (7.746139567779304e-05, 'mesyuarat Majlis Pimpinan Tertinggi'),
 (7.522448275120805e-05, 'Muhammad Suhaimi'),
 (7.520443949997106e-05, 'pengesahan'),
 (7.519602119292121e-05, 'terbatal Plaintif')]

Alternatively, you can use any classification model to find keywords sensitive to a specific domain.

[15]:
sentiment = malaya.sentiment.transformer(model = 'xlnet', quantized = True)
WARNING:root:Load quantized model will cause accuracy drop.
[16]:
malaya.keyword_extraction.textrank(string, model = sentiment)
[16]:
[(6.592349998684001e-05, 'pengesahan'),
 (6.522374046273496e-05, 'parti'),
 (6.519787313586387e-05, 'ditandatangani Muhammad Suhaimi bertarikh Mei'),
 (6.50355056789609e-05, 'memegang jawatan'),
 (6.48614030622403e-05, 'pemilihan parti')]

fixed-ngram with Attention#

[17]:
stopwords = malaya.text.function.get_stopwords()
vectorizer = SkipGramCountVectorizer(
    token_pattern = r'[\S]+',
    ngram_range = (1, 3),
    stop_words = stopwords,
    lowercase = False,
    skip = 2
)
[18]:
malaya.keyword_extraction.textrank(string, model = electra, vectorizer = vectorizer)
[18]:
[(5.652169440330057e-09, 'plaintif perisytiharan'),
 (5.652075728462069e-09, 'perisytiharan ahli sah'),
 (5.651996176263403e-09, 'Plaintif perisytiharan keahlian'),
 (5.651931485635611e-09, 'Perisytiharan'),
 (5.651703407437562e-09, 'plaintif memohon perisytiharan')]
[19]:
malaya.keyword_extraction.textrank(string, model = albert, vectorizer = vectorizer)
[19]:
[(7.237609487831676e-09, 'Perisytiharan Fasal Perlembagaan'),
 (7.237148398598793e-09, 'Fasal Perlembagaan melucutkan'),
 (7.234637484224076e-09, 'Pimpinan Tertinggi (MPT)'),
 (7.2318264874552195e-09, 'Majlis Pimpinan (MPT)'),
 (7.231510832160389e-09, 'Perisytiharan Fasal BERSATU')]

Use Attention mechanism#

Use the attention mechanism from a transformer model to extract important keywords.

def attention(
    string: str,
    model,
    vectorizer = None,
    top_k: int = 5,
    atleast: int = 1,
    stopwords = get_stopwords,
    **kwargs
):
    """
    Extract keywords using an attention mechanism.

    Parameters
    ----------
    string: str
    model: Object
        Transformer model or any model that has an `attention` method.
    vectorizer: Object, optional (default=None)
        Prefer `sklearn.feature_extraction.text.CountVectorizer` or
        `malaya.text.vectorizer.SkipGramCountVectorizer`.
        If None, ngrams will be generated automatically based on `stopwords`.
    top_k: int, optional (default=5)
        return top-k results.
    atleast: int, optional (default=1)
        minimum number of occurrences in the string to accept as a candidate.
    stopwords: List[str], (default=malaya.texts.function.get_stopwords)
        A callable that returns a List[str], a List[str], or a Tuple[str]

    Returns
    -------
    result: Tuple[float, str]
    """

auto-ngram#

This will automatically generate variable-size ngrams as keyword candidates.

[20]:
malaya.keyword_extraction.attention(string, model = electra)
[20]:
[(0.9452064615567696, 'menghalang pelupusan pertikaian'),
 (0.00748668920928296, 'Fasal Perlembagaan BERSATU'),
 (0.005130746086467051, 'ahli BERSATU'),
 (0.005036596770673816, 'melucutkan memberhentikan keahlian'),
 (0.004883705096775167, 'BERSATU')]
[21]:
malaya.keyword_extraction.attention(string, model = albert)
[21]:
[(0.16196376947988833, 'plaintif memohon perisytiharan'),
 (0.09294069270557498, 'memohon perisytiharan'),
 (0.06902307677431335, 'plaintif'),
 (0.05584833292678144, 'ditandatangani Muhammad Suhaimi bertarikh Mei'),
 (0.05206227265177878, 'dipohon perisytiharan')]

fixed-ngram#

[22]:
malaya.keyword_extraction.attention(string, model = electra, vectorizer = vectorizer)
[22]:
[(0.037611192232396125, 'pertikaian mahkamah Perlembagaan'),
 (0.03757121639209162, 'pertikaian mahkamah Fasal'),
 (0.037563414917813766, 'terpakai pertikaian mahkamah'),
 (0.03756289871618318, 'menghalang pertikaian mahkamah'),
 (0.037561437116523086, 'pelupusan pertikaian mahkamah')]
[23]:
malaya.keyword_extraction.attention(string, model = albert, vectorizer = vectorizer)
[23]:
[(0.0073900373302097505, 'saman plaintif memohon'),
 (0.006895211361267655, 'Dalam plaintif memohon'),
 (0.006638399608830277, 'plaintif memohon BERSATU'),
 (0.0062231449129606375, 'Dalam saman memohon'),
 (0.006196574312595335, 'plaintif memohon perisytiharan')]

Use similarity mechanism#

def similarity(
    string: str,
    model,
    vectorizer = None,
    top_k: int = 5,
    atleast: int = 1,
    stopwords = get_stopwords,
    **kwargs,
):
    """
    Extract keywords using similarity between the sentence embedding and keyword embeddings.

    Parameters
    ----------
    string: str
    model: Object
        Transformer model or any model that has a `vectorize` method.
    vectorizer: Object, optional (default=None)
        Prefer `sklearn.feature_extraction.text.CountVectorizer` or
        `malaya.text.vectorizer.SkipGramCountVectorizer`.
        If None, ngrams will be generated automatically based on `stopwords`.
    top_k: int, optional (default=5)
        return top-k results.
    atleast: int, optional (default=1)
        minimum number of occurrences in the string to accept as a candidate.
    stopwords: List[str], (default=malaya.texts.function.get_stopwords)
        A callable that returns a List[str], a List[str], or a Tuple[str]

    Returns
    -------
    result: Tuple[float, str]
    """

This works best with `malaya.similarity.transformer(model = 'alxlnet')`.

[4]:
alxlnet = malaya.similarity.transformer(model = 'alxlnet')
[5]:
malaya.keyword_extraction.similarity(string, model = alxlnet)
[5]:
[(0.817958, 'terbatal Plaintif'),
 (0.79831344, 'memohon perisytiharan'),
 (0.7925713, 'melucutkan memberhentikan keahlian'),
 (0.7921115, 'plaintif memohon perisytiharan'),
 (0.76372087, 'Seksyen C Akta Pertubuhan')]