Emotion Analysis#

This tutorial is available as an IPython notebook at Malaya/example/emotion.

This module was trained on both standard and local (including social media) language structures, so it is safe to use for both.

[1]:
%%time
import malaya
2023-09-23 18:45:27.864902: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-09-23 18:45:27.931441: I tensorflow/core/util/port.cc:104] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-09-23 18:45:28.381244: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2023-09-23 18:45:28.381275: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2023-09-23 18:45:28.381288: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
CPU times: user 3.63 s, sys: 4.04 s, total: 7.67 s
Wall time: 2.94 s
/home/husein/dev/malaya/malaya/tokenizer.py:214: FutureWarning: Possible nested set at position 3397
  self.tok = re.compile(r'({})'.format('|'.join(pipeline)))
/home/husein/dev/malaya/malaya/tokenizer.py:214: FutureWarning: Possible nested set at position 3927
  self.tok = re.compile(r'({})'.format('|'.join(pipeline)))

Labels supported#

Default labels for the emotion module.

[2]:
malaya.emotion.label
[2]:
['anger', 'fear', 'happy', 'love', 'sadness', 'surprise']

Example texts#

Copied and pasted from random tweets.

[3]:
anger_text = 'babi la company ni, aku dah la penat datang dari jauh'  # roughly: "damn this company, I'm already tired from travelling so far"
fear_text = 'takut doh tengok cerita hantu tadi'  # roughly: "so scared after watching that ghost story just now"
happy_text = 'bestnya dapat tidur harini, tak payah pergi kerja'  # roughly: "so nice to get to sleep today, no need to go to work"
love_text = 'aku sayang sgt dia dah doh'  # roughly: "I really do love him/her so much"
sadness_text = 'kecewa tengok kerajaan baru ni, janji ape pun tak dapat'  # roughly: "disappointed with this new government, none of the promises delivered"
surprise_text = 'sakit jantung aku, terkejut dengan cerita hantu tadi'  # roughly: "my heart nearly stopped, shocked by that ghost story just now"

Load multinomial model#

def multinomial(**kwargs):
    """
    Load multinomial emotion model.

    Returns
    -------
    result : malaya.model.ml.Bayes class
    """
[4]:
model = malaya.emotion.multinomial()
/home/husein/.local/lib/python3.8/site-packages/sklearn/base.py:329: UserWarning: Trying to unpickle estimator ComplementNB from version 0.22.1 when using version 1.1.2. This might lead to breaking code or invalid results. Use at your own risk. For more info please refer to:
https://scikit-learn.org/stable/model_persistence.html#security-maintainability-limitations
  warnings.warn(
/home/husein/.local/lib/python3.8/site-packages/sklearn/base.py:329: UserWarning: Trying to unpickle estimator TfidfTransformer from version 0.22.1 when using version 1.1.2. This might lead to breaking code or invalid results. Use at your own risk. For more info please refer to:
https://scikit-learn.org/stable/model_persistence.html#security-maintainability-limitations
  warnings.warn(
/home/husein/.local/lib/python3.8/site-packages/sklearn/base.py:329: UserWarning: Trying to unpickle estimator TfidfVectorizer from version 0.22.1 when using version 1.1.2. This might lead to breaking code or invalid results. Use at your own risk. For more info please refer to:
https://scikit-learn.org/stable/model_persistence.html#security-maintainability-limitations
  warnings.warn(
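
These UserWarnings appear because the pickled multinomial model was saved with scikit-learn 0.22.1 while a newer release (1.1.2) is installed. The predictions below still behave as expected; if the noise bothers you, one optional way to silence it (standard Python warnings machinery, not part of the Malaya API) is:

import warnings

# optional: hide the scikit-learn version-mismatch warnings shown above
warnings.filterwarnings('ignore', category=UserWarning, module='sklearn')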

Predict batch of strings#

def predict(self, strings: List[str]):
    """
    classify list of strings.

    Parameters
    ----------
    strings: List[str]

    Returns
    -------
    result: List[str]
    """
[5]:
model.predict([anger_text])
/home/husein/dev/malaya/malaya/stem.py:50: FutureWarning: Possible nested set at position 3
  or re.findall(_expressions['ic'], word.lower())
[5]:
['fear']
[6]:
model.predict(
    [anger_text, fear_text, happy_text, love_text, sadness_text, surprise_text]
)
[6]:
['fear', 'fear', 'happy', 'love', 'sadness', 'surprise']

Predict batch of strings with probability#

def predict_proba(self, strings: List[str]):
    """
    classify list of strings and return probability.

    Parameters
    ----------
    strings: List[str]

    Returns
    -------
    result: List[dict[str, float]]
    """
[7]:
model.predict_proba(
    [anger_text, fear_text, happy_text, love_text, sadness_text, surprise_text]
)
[7]:
[{'anger': 0.22968519086673891,
  'fear': 0.33425478385257884,
  'happy': 0.11615463884648307,
  'love': 0.10615954967244598,
  'sadness': 0.10196790232932866,
  'surprise': 0.11177793443242351},
 {'anger': 0.11379406005377896,
  'fear': 0.4006934391283133,
  'happy': 0.11389665647702245,
  'love': 0.12481915233837086,
  'sadness': 0.0991261507380643,
  'surprise': 0.14767054126445014},
 {'anger': 0.14667998117610198,
  'fear': 0.1422732633232615,
  'happy': 0.29984520430807293,
  'love': 0.1409005078277281,
  'sadness': 0.13374705318404811,
  'surprise': 0.13655399018078768},
 {'anger': 0.1590563839629243,
  'fear': 0.14687344690114268,
  'happy': 0.1419948160674701,
  'love': 0.279550441361504,
  'sadness': 0.1285927908584157,
  'surprise': 0.14393212084854254},
 {'anger': 0.13425914937312508,
  'fear': 0.12053328146716755,
  'happy': 0.14923350911233682,
  'love': 0.10289492749919464,
  'sadness': 0.36961334597699913,
  'surprise': 0.12346578657117815},
 {'anger': 0.06724850384395685,
  'fear': 0.1283628050361525,
  'happy': 0.05801958643852813,
  'love': 0.06666524240157067,
  'sadness': 0.06537667186293224,
  'surprise': 0.6143271904168589}]
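
predict_proba returns one dict per input string, so the most likely label is simply an argmax over each dict. A small sketch in plain Python (not part of the Malaya API), using the output above:

probs = model.predict_proba([anger_text, fear_text])
# pick the label with the highest probability for each string
[max(p, key=p.get) for p in probs]
# -> ['fear', 'fear'], matching model.predict above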

List available HuggingFace models#

[8]:
malaya.emotion.available_huggingface
[8]:
{'mesolitica/emotion-analysis-nanot5-small-malaysian-cased': {'Size (MB)': 167,
  'macro precision': 0.97336,
  'macro recall': 0.9737,
  'macro f1-score': 0.97363},
 'mesolitica/emotion-analysis-nanot5-base-malaysian-cased': {'Size (MB)': 439,
  'macro precision': 0.98003,
  'macro recall': 0.98311,
  'macro f1-score': 0.98139}}
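
available_huggingface is a plain dict keyed by model name, so comparing the candidates side by side is straightforward, for example with pandas (an optional dependency, used here only for readability):

import pandas as pd

# one row per model, columns for size and macro precision / recall / f1-score
pd.DataFrame(malaya.emotion.available_huggingface).T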

Load HuggingFace model#

def huggingface(
    model: str = 'mesolitica/emotion-analysis-nanot5-small-malaysian-cased',
    force_check: bool = True,
    **kwargs,
):
    """
    Load HuggingFace model to classify emotion.

    Parameters
    ----------
    model: str, optional (default='mesolitica/emotion-analysis-nanot5-small-malaysian-cased')
        Check available models at `malaya.emotion.available_huggingface`.
    force_check: bool, optional (default=True)
        Force-check that the model is one of the Malaya models.
        Set to False if you have your own huggingface model.

    Returns
    -------
    result: malaya.torch_model.huggingface.Classification
    """
[9]:
model = malaya.emotion.huggingface()
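
Because force_check defaults to True, only the models listed in malaya.emotion.available_huggingface are accepted. If you have your own compatible fine-tuned classifier on HuggingFace, pass force_check=False; the repository name below is a hypothetical placeholder:

# hypothetical repository name, replace with your own fine-tuned model
my_model = malaya.emotion.huggingface(
    model='your-username/your-emotion-model',
    force_check=False,
)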

Predict batch of strings#

def predict(self, strings: List[str]):
    """
    classify list of strings.

    Parameters
    ----------
    strings: List[str]

    Returns
    -------
    result: List[str]
    """
[10]:
model.predict(
    [anger_text, fear_text, happy_text, love_text, sadness_text, surprise_text]
)
[10]:
['anger', 'fear', 'anger', 'love', 'sadness', 'surprise']

Predict batch of strings with probability#

def predict_proba(self, strings: List[str]):
    """
    classify list of strings and return probability.

    Parameters
    ----------
    strings : List[str]

    Returns
    -------
    result: List[dict[str, float]]
    """
[11]:
model.predict_proba(
    [anger_text, fear_text, happy_text, love_text, sadness_text, surprise_text]
)
[11]:
[{'anger': 0.9920779466629028,
  'fear': 0.002742587821558118,
  'happy': 0.0007182527333498001,
  'love': 0.003472566604614258,
  'sadness': 0.0004595498612616211,
  'surprise': 0.0005290955305099487},
 {'anger': 0.0013869482791051269,
  'fear': 0.9977095127105713,
  'happy': 8.731099660508335e-05,
  'love': 0.0006927275680936873,
  'sadness': 2.510174635972362e-05,
  'surprise': 9.857082477537915e-05},
 {'anger': 0.9649528861045837,
  'fear': 0.0035354183055460453,
  'happy': 0.02452198415994644,
  'love': 0.003478029975667596,
  'sadness': 0.000459152739495039,
  'surprise': 0.003052382031455636},
 {'anger': 0.0012408840702846646,
  'fear': 0.0002690576366148889,
  'happy': 8.375391917070374e-05,
  'love': 0.9980649948120117,
  'sadness': 0.00024171061522793025,
  'surprise': 9.9410921393428e-05},
 {'anger': 0.0002834223269019276,
  'fear': 0.00013902968203183264,
  'happy': 1.7576363461557776e-05,
  'love': 0.00012455208343453705,
  'sadness': 0.9994227886199951,
  'surprise': 1.2706384040939156e-05},
 {'anger': 0.0033617503941059113,
  'fear': 0.00024840401601977646,
  'happy': 9.1005269496236e-05,
  'love': 0.0001304154866375029,
  'sadness': 0.00013015331933274865,
  'surprise': 0.9960381388664246}]
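
The same argmax trick shown for the multinomial model also exposes how confident each prediction is. Note that for the sarcastic happy_text the model returns 'anger' with high confidence, a reminder to sanity-check predictions on informal text. A plain-Python sketch over the output above:

probs = model.predict_proba(
    [anger_text, fear_text, happy_text, love_text, sadness_text, surprise_text]
)
# highest-probability label and its score for each string
[max(p.items(), key=lambda kv: kv[1]) for p in probs]
# the third entry (happy_text) comes out as ('anger', ~0.96)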

Stacking models#

For more information, you can read https://malaya.readthedocs.io/en/latest/Stack.html

[12]:
multinomial = malaya.emotion.multinomial()
[13]:
malaya.stack.predict_stack([multinomial, model], [anger_text])
[13]:
[{'anger': 0.4773527129219559,
  'fear': 0.030277435484063434,
  'happy': 0.00913391706396899,
  'love': 0.01920016354028784,
  'sadness': 0.00684538940763616,
  'surprise': 0.0076903319510817905}]
[14]:
malaya.stack.predict_stack([multinomial, model], [anger_text, sadness_text])
[14]:
[{'anger': 0.4773527129219559,
  'fear': 0.03027742134692621,
  'happy': 0.00913391706396899,
  'love': 0.01920016740231647,
  'sadness': 0.00684538940763616,
  'surprise': 0.0076903319510817905},
 {'anger': 0.0061686367698025315,
  'fear': 0.0040936174462977955,
  'happy': 0.0016195631632911615,
  'love': 0.0035799130708538637,
  'sadness': 0.6077828567403819,
  'surprise': 0.0012525194799234709}]
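
predict_stack combines the probability outputs of the individual models; the default aggregation appears to be a geometric mean, which we can verify by hand against the predict_proba outputs above (values copied from the earlier cells):

# 'anger' probabilities for anger_text from the multinomial and HuggingFace models above
p_multinomial = 0.22968519086673891
p_huggingface = 0.9920779466629028

# geometric mean reproduces the stacked 'anger' score of ~0.47735
(p_multinomial * p_huggingface) ** 0.5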