Sentiment Analysis#
This tutorial is available as an IPython notebook at Malaya/example/sentiment.
This module was trained on both standard and local (including social media) language structures, so it is safe to use for both.
[1]:
%%time
import malaya
CPU times: user 2.8 s, sys: 3.91 s, total: 6.71 s
Wall time: 1.96 s
/home/husein/dev/malaya/malaya/tokenizer.py:214: FutureWarning: Possible nested set at position 3397
self.tok = re.compile(r'({})'.format('|'.join(pipeline)))
/home/husein/dev/malaya/malaya/tokenizer.py:214: FutureWarning: Possible nested set at position 3927
self.tok = re.compile(r'({})'.format('|'.join(pipeline)))
Labels supported#
Default labels for sentiment module.
[2]:
malaya.sentiment.label
[2]:
['negative', 'neutral', 'positive']
Example texts#
Copied from random tweets.
[3]:
string1 = 'Sis, students from overseas were brought back because they are not in their countries which is if something happens to them, its not the other countries’ responsibility. Student dalam malaysia ni dah dlm tggjawab kerajaan. Mana part yg tak faham?'
string2 = 'Harap kerajaan tak bukak serentak. Slowly release week by week. Focus on economy related industries dulu'
string3 = 'Idk if aku salah baca ke apa. Bayaran rm350 utk golongan umur 21 ke bawah shj ? Anyone? If 21 ke atas ok lah. If umur 21 ke bawah? Are you serious? Siapa yg lebih byk komitmen? Aku hrp aku salah baca. Aku tk jumpa artikel tu'
string4 = 'Jabatan Penjara Malaysia diperuntukkan RM20 juta laksana program pembangunan Insan kepada banduan. Majikan yang menggaji bekas banduan, bekas penagih dadah diberi potongan cukai tambahan sehingga 2025.'
string5 = 'Dua Hari Nyaris Hatrick, Murai Batu Ceriwis Siap Meraikan Even Bekasi Bersatu!'
string6 = '@MasidiM Moga kerajaan sabah, tidak ikut pkp macam kerajaan pusat. Makin lama pkp, makin ramai hilang pekerjaan. Ti https://t.co/nSIABkkEDS'
string7 = 'Hopefully esok boleh ambil gambar dengan'
Load multinomial model#
def multinomial(**kwargs):
"""
Load multinomial sentiment model.
Returns
-------
result : malaya.model.ml.Bayes class
"""
[4]:
model = malaya.sentiment.multinomial()
/home/husein/.local/lib/python3.8/site-packages/sklearn/base.py:329: UserWarning: Trying to unpickle estimator ComplementNB from version 0.22.1 when using version 1.1.2. This might lead to breaking code or invalid results. Use at your own risk. For more info please refer to:
https://scikit-learn.org/stable/model_persistence.html#security-maintainability-limitations
warnings.warn(
/home/husein/.local/lib/python3.8/site-packages/sklearn/base.py:329: UserWarning: Trying to unpickle estimator TfidfTransformer from version 0.22.1 when using version 1.1.2. This might lead to breaking code or invalid results. Use at your own risk. For more info please refer to:
https://scikit-learn.org/stable/model_persistence.html#security-maintainability-limitations
warnings.warn(
/home/husein/.local/lib/python3.8/site-packages/sklearn/base.py:329: UserWarning: Trying to unpickle estimator TfidfVectorizer from version 0.22.1 when using version 1.1.2. This might lead to breaking code or invalid results. Use at your own risk. For more info please refer to:
https://scikit-learn.org/stable/model_persistence.html#security-maintainability-limitations
warnings.warn(
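These UserWarnings appear because the pickled estimators were trained with scikit-learn 0.22.1 but unpickled under 1.1.2; the model still loads and works. If the noise bothers you, a minimal sketch for silencing just these warnings with the standard library:

import warnings

# Silence the sklearn version-mismatch warnings shown above.
warnings.filterwarnings('ignore', category=UserWarning, module='sklearn')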
Predict batch of strings#
def predict(self, strings: List[str]):
"""
Classify a list of strings.
Parameters
----------
strings: List[str]
Returns
-------
result: List[str]
"""
[5]:
model.predict([string1, string2, string3, string4, string5, string6, string7])
/home/husein/dev/malaya/malaya/model/stem.py:28: FutureWarning: Possible nested set at position 3
or re.findall(_expressions['ic'], word.lower())
[5]:
['negative',
'negative',
'negative',
'negative',
'neutral',
'negative',
'positive']
Predict batch of strings with probability#
def predict_proba(self, strings: List[str]):
"""
Classify a list of strings and return the probability of each label.
Parameters
----------
strings: List[str]
Returns
-------
result: List[dict[str, float]]
"""
[6]:
model.predict_proba([string1, string2, string3, string4, string5, string6, string7])
[6]:
[{'negative': 0.5469396890722973,
'neutral': 0.13565086101545018,
'positive': 0.31740944991225484},
{'negative': 0.4268497980032366,
'neutral': 0.21946047551031087,
'positive': 0.3536897264864507},
{'negative': 0.5915531411178273,
'neutral': 0.1601334910709211,
'positive': 0.2483133678112509},
{'negative': 0.5165487021938855,
'neutral': 0.13998199029917185,
'positive': 0.34346930750694543},
{'negative': 0.23311742560677587,
'neutral': 0.4182488090323352,
'positive': 0.3486337653608891},
{'negative': 0.8494818936945382,
'neutral': 0.060109943158198856,
'positive': 0.0904081631472596},
{'negative': 0.2922247908043552,
'neutral': 0.3367232807540181,
'positive': 0.3710519284416263}]
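Each element returned by predict_proba is a dict mapping label to probability, so the most likely label is simply the argmax over each dict. A minimal sketch in plain Python (not part of the Malaya API):

probs = model.predict_proba([string1, string2, string3])

# Take the label with the highest probability for each string.
labels = [max(p, key=p.get) for p in probs]
print(labels)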
List available HuggingFace models#
[7]:
malaya.sentiment.available_huggingface
[7]:
{'mesolitica/sentiment-analysis-nanot5-tiny-malaysian-cased': {'Size (MB)': 93,
'macro precision': 0.67768,
'macro recall': 0.68266,
'macro f1-score': 0.67997},
'mesolitica/sentiment-analysis-nanot5-small-malaysian-cased': {'Size (MB)': 167,
'macro precision': 0.67602,
'macro recall': 0.6712,
'macro f1-score': 0.67339}}
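Any key in this dict can be passed as the model argument to malaya.sentiment.huggingface (documented below). For example, to trade a little accuracy for the smaller checkpoint:

# Load the ~93 MB tiny checkpoint instead of the default small one.
tiny_model = malaya.sentiment.huggingface(
    model='mesolitica/sentiment-analysis-nanot5-tiny-malaysian-cased'
)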
[8]:
print(malaya.sentiment.info)
Trained on https://huggingface.co/datasets/mesolitica/chatgpt-explain-sentiment
Split 90% to train, 10% to test.
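The training corpus is public on the HuggingFace Hub. If you want to inspect it yourself, a minimal sketch using the datasets library (assumed installed; it is not required by Malaya itself, and this assumes the repository's files are in a format load_dataset can auto-detect):

from datasets import load_dataset

# Pull the corpus the classifier was trained on.
ds = load_dataset('mesolitica/chatgpt-explain-sentiment')
print(ds)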
Load HuggingFace model#
def huggingface(
model: str = 'mesolitica/sentiment-analysis-nanot5-small-malaysian-cased',
force_check: bool = True,
**kwargs,
):
"""
Load HuggingFace model to classify sentiment.
Parameters
----------
model: str, optional (default='mesolitica/sentiment-analysis-nanot5-small-malaysian-cased')
Check available models at `malaya.sentiment.available_huggingface`.
force_check: bool, optional (default=True)
Check that the model is one of the official Malaya models.
Set to False if you are loading your own HuggingFace model.
Returns
-------
result: malaya.torch_model.huggingface.Classification
"""
[9]:
model = malaya.sentiment.huggingface()
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
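Per the force_check parameter above, the loader can also point at your own fine-tuned classifier. The repository name below is a placeholder, not a real checkpoint:

custom_model = malaya.sentiment.huggingface(
    model='your-username/your-sentiment-model',  # hypothetical repository
    force_check=False,
)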
Predict batch of strings#
def predict(self, strings: List[str]):
"""
Classify a list of strings.
Parameters
----------
strings: List[str]
Returns
-------
result: List[str]
"""
[10]:
%%time
model.predict([string1, string2, string3, string4, string5, string6, string7])
CPU times: user 1.47 s, sys: 348 ms, total: 1.82 s
Wall time: 160 ms
[10]:
['negative', 'neutral', 'neutral', 'neutral', 'neutral', 'negative', 'neutral']
Predict batch of strings with probability#
def predict_proba(self, strings: List[str]):
"""
Classify a list of strings and return the probability of each label.
Parameters
----------
strings: List[str]
Returns
-------
result: List[dict[str, float]]
"""
[12]:
%%time
model.predict_proba([string1, string2, string3, string4, string5, string6, string7])
CPU times: user 1.06 s, sys: 0 ns, total: 1.06 s
Wall time: 93.1 ms
[12]:
[{'negative': 0.9495719075202942,
'neutral': 0.047029513865709305,
'positive': 0.0033985504414886236},
{'negative': 0.29643991589546204,
'neutral': 0.4939780533313751,
'positive': 0.20958206057548523},
{'negative': 0.2493346780538559,
'neutral': 0.7501162886619568,
'positive': 0.0005490880575962365},
{'negative': 0.0963020920753479,
'neutral': 0.6658434271812439,
'positive': 0.2378544956445694},
{'negative': 0.03835646063089371,
'neutral': 0.7759678363800049,
'positive': 0.1856757402420044},
{'negative': 0.9871785044670105,
'neutral': 0.009978721849620342,
'positive': 0.002842693356797099},
{'negative': 0.023206932470202446,
'neutral': 0.9497118592262268,
'positive': 0.02708127535879612}]
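Note that the two classifiers disagree on several of the examples: the multinomial model leans negative while the HuggingFace model favours neutral. A minimal sketch, reusing the names from this notebook, to line the predictions up side by side:

strings = [string1, string2, string3, string4, string5, string6, string7]

ml = malaya.sentiment.multinomial()
hf = malaya.sentiment.huggingface()

# Print both models' labels next to a preview of each input string.
for s, ml_label, hf_label in zip(strings, ml.predict(strings), hf.predict(strings)):
    print(f'{ml_label:<8} {hf_label:<8} {s[:50]}')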