Toxicity Analysis

This tutorial is available as an IPython notebook at Malaya/example/toxicity.

This module was trained on both standard and local (including social media) language structures, so it is safe to use for both.

[1]:
%%time
import malaya
CPU times: user 5.91 s, sys: 1.23 s, total: 7.14 s
Wall time: 8.27 s

Get labels

[2]:
malaya.toxicity.label
[2]:
['severe toxic',
 'obscene',
 'identity attack',
 'insult',
 'threat',
 'asian',
 'atheist',
 'bisexual',
 'buddhist',
 'christian',
 'female',
 'heterosexual',
 'indian',
 'homosexual, gay or lesbian',
 'intellectual or learning disability',
 'male',
 'muslim',
 'other disability',
 'other gender',
 'other race or ethnicity',
 'other religion',
 'other sexual orientation',
 'physical disability',
 'psychiatric or mental illness',
 'transgender',
 'malay',
 'chinese']
[4]:
string = 'Benda yg SALAH ni, jgn lah didebatkan. Yg SALAH xkan jadi betul. Ingat tu. Mcm mana kesat sekalipun org sampaikan mesej, dan memang benda tu salah, diam je. Xyah nk tunjuk kau open sangat nk tegur cara org lain berdakwah. '
another_string = 'melayu bodoh, dah la gay, sokong lgbt lagi, memang tak guna'
string1 = 'Sis, students from overseas were brought back because they are not in their countries which is if something happens to them, its not the other countries’ responsibility. Student dalam malaysia ni dah dlm tggjawab kerajaan. Mana part yg tak faham?'
string2 = 'Harap kerajaan tak bukak serentak. Slowly release week by week. Focus on economy related industries dulu'

Load multinomial model

def multinomial(**kwargs):
    """
    Load multinomial toxicity model.

    Returns
    -------
    result : malaya.model.ml.MultilabelBayes class
    """
[9]:
model = malaya.toxicity.multinomial()

Predict batch of strings

def predict(self, strings: List[str]):
    """
    classify list of strings.

    Parameters
    ----------
    strings: List[str]

    Returns
    -------
    result: List[List[str]]
    """
[6]:
model.predict([string])
[6]:
[['severe toxic',
  'obscene',
  'identity attack',
  'insult',
  'indian',
  'malay',
  'chinese']]

Predict batch of strings with probability

def predict_proba(self, strings: List[str]):
    """
    classify list of strings and return probability.

    Parameters
    ----------
    strings: List[str]

    Returns
    -------
    result: List[dict[str, float]]
    """
[7]:
model.predict_proba([string])
[7]:
[{'severe toxic': 0.997487040981572,
  'obscene': 0.9455379277616331,
  'identity attack': 0.8274699625500679,
  'insult': 0.5607594945618526,
  'threat': 0.024772971511820983,
  'asian': 0.0221240002096628,
  'atheist': 0.013774558637508741,
  'bisexual': 0.0024495807483865223,
  'buddhist': 0.004640372956039871,
  'christian': 0.052795457745171054,
  'female': 0.05289744129561423,
  'heterosexual': 0.008128507494633362,
  'indian': 0.9023637357823499,
  'homosexual, gay or lesbian': 0.04385664232535533,
  'intellectual or learning disability': 0.0014981591337876019,
  'male': 0.07976929455558882,
  'muslim': 0.08806420077375651,
  'other disability': 0.0,
  'other gender': 0.0,
  'other race or ethnicity': 0.0017014040578187566,
  'other religion': 0.0017333144620482767,
  'other sexual orientation': 0.00122606681013474,
  'physical disability': 0.001489522998169223,
  'psychiatric or mental illness': 0.027125947355667267,
  'transgender': 0.012349564445375391,
  'malay': 0.9991900346707605,
  'chinese': 0.9886782229459774}]
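
If you prefer to work with probabilities and pick your own cut-off, you can recover label lists by thresholding the probability dicts yourself. A minimal sketch, assuming a 0.5 threshold (the threshold value is our choice here, not part of the API):

# a sketch: turn predict_proba output into label lists by thresholding
# each probability dict; 0.5 is an assumed cut-off, tune it to your needs
threshold = 0.5
probs = model.predict_proba([string])
labels = [[label for label, p in row.items() if p >= threshold] for row in probs]
labels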

List available Transformer models

[2]:
malaya.toxicity.available_transformer()
INFO:root:tested on 20% test set.
[2]:
                 Size (MB)  Quantized Size (MB)  micro precision  micro recall  micro f1-score
bert                 425.6               111.00          0.86098       0.77313         0.81469
tiny-bert             57.4                15.40          0.83535       0.79611         0.81526
albert                48.6                12.80          0.86054       0.76973         0.81261
tiny-albert           22.4                 5.98          0.83535       0.79611         0.81526
xlnet                446.6               118.00          0.77904       0.83829         0.80758
alxlnet               46.8                13.30          0.83376       0.80221         0.81768
fastformer           446.6               118.00          0.88249       0.74826         0.80985
tiny-fastformer      77.3                 19.60          0.85131       0.76620         0.80652
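
From the table above, the tiny variants reach roughly the same micro f1-score at a fraction of the size, so they are a reasonable choice when memory or download size matters. A minimal sketch, using the loader documented in the next section (the choice of tiny-bert here is just an example):

# a sketch: load a smaller architecture when footprint matters more
# than the last point of precision; 'tiny-bert' is an arbitrary choice
small_model = malaya.toxicity.transformer(model = 'tiny-bert')
small_model.predict([string])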

Load Transformer model

def transformer(model: str = 'xlnet', quantized: bool = False, **kwargs):
    """
    Load Transformer toxicity model.

    Parameters
    ----------
    model : str, optional (default='xlnet')
        Model architecture supported. Allowed values:

        * ``'bert'`` - Google BERT BASE parameters.
        * ``'tiny-bert'`` - Google BERT TINY parameters.
        * ``'albert'`` - Google ALBERT BASE parameters.
        * ``'tiny-albert'`` - Google ALBERT TINY parameters.
        * ``'xlnet'`` - Google XLNET BASE parameters.
        * ``'alxlnet'`` - Malaya ALXLNET BASE parameters.
        * ``'fastformer'`` - FastFormer BASE parameters.
        * ``'tiny-fastformer'`` - FastFormer TINY parameters.

    quantized : bool, optional (default=False)
        if True, will load 8-bit quantized model.
        Quantized model is not necessarily faster; it totally depends on the machine.

    Returns
    -------
    result: model
        List of model classes:

        * if `bert` in model, will return `malaya.model.bert.SigmoidBERT`.
        * if `xlnet` in model, will return `malaya.model.xlnet.SigmoidXLNET`.
        * if `fastformer` in model, will return `malaya.model.fastformer.SigmoidFastFormer`.
    """
[16]:
model = malaya.toxicity.transformer(model = 'alxlnet')

Load Quantized model

To load an 8-bit quantized model, simply pass quantized = True; the default is False.

We can expect a slight accuracy drop from the quantized model, and it is not necessarily faster than the normal 32-bit float model; it depends entirely on the machine.

[ ]:
quantized_model = malaya.toxicity.transformer(model = 'alxlnet', quantized = True)
WARNING:root:Load quantized model will cause accuracy drop.
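
Because the speed of the quantized model depends entirely on the machine, it is worth timing both variants on your own hardware before committing to one. A minimal sketch using wall-clock timing (the batch and the printed numbers are illustrative only):

# a sketch: compare wall-clock latency of the float32 and quantized
# models on the same batch; results depend entirely on your machine
import time

batch = [string, another_string] * 8

start = time.time()
model.predict_proba(batch)
print('float32 model  :', time.time() - start, 'seconds')

start = time.time()
quantized_model.predict_proba(batch)
print('quantized model:', time.time() - start, 'seconds')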

Predict batch of strings

def predict(self, strings: List[str]):
    """
    classify list of strings.

    Parameters
    ----------
    strings: List[str]

    Returns
    -------
    result: List[List[str]]
    """
[12]:
model.predict([string,another_string])
[12]:
[['obscene'],
 ['severe toxic', 'obscene', 'identity attack', 'insult', 'malay']]

Predict batch of strings with probability

def predict_proba(self, strings: List[str]):
    """
    classify list of strings and return probability.

    Parameters
    ----------
    strings : List[str]

    Returns
    -------
    result: List[dict[str, float]]
    """
[14]:
model.predict_proba([string,another_string])
[14]:
[{'severe toxic': 0.30419078,
  'obscene': 0.07300964,
  'identity attack': 0.02309686,
  'insult': 0.14792377,
  'threat': 0.0043829083,
  'asian': 0.00018724799,
  'atheist': 0.0013933778,
  'bisexual': 0.0005682409,
  'buddhist': 0.0006982982,
  'christian': 0.00010216236,
  'female': 0.0062876344,
  'heterosexual': 3.6597252e-05,
  'indian': 0.020283729,
  'homosexual, gay or lesbian': 0.0008122027,
  'intellectual or learning disability': 0.00015977025,
  'male': 0.0007993579,
  'muslim': 0.054483294,
  'other disability': 0.00017657876,
  'other gender': 0.00018069148,
  'other race or ethnicity': 6.273389e-05,
  'other religion': 0.0011053085,
  'other sexual orientation': 0.0013027787,
  'physical disability': 0.00010755658,
  'psychiatric or mental illness': 0.00078335404,
  'transgender': 0.00080055,
  'malay': 0.0033579469,
  'chinese': 0.20889702},
 {'severe toxic': 0.99571323,
  'obscene': 0.91805434,
  'identity attack': 0.95676684,
  'insult': 0.7667657,
  'threat': 0.02582252,
  'asian': 0.00074103475,
  'atheist': 0.0012175143,
  'bisexual': 0.07754475,
  'buddhist': 0.004547477,
  'christian': 0.0019699335,
  'female': 0.03404945,
  'heterosexual': 0.029964417,
  'indian': 0.021356285,
  'homosexual, gay or lesbian': 0.13626209,
  'intellectual or learning disability': 0.021410972,
  'male': 0.029543608,
  'muslim': 0.06485465,
  'other disability': 0.0006414652,
  'other gender': 0.04015115,
  'other race or ethnicity': 0.010606945,
  'other religion': 0.001650244,
  'other sexual orientation': 0.04054076,
  'physical disability': 0.0025109593,
  'psychiatric or mental illness': 0.0022883855,
  'transgender': 0.01127643,
  'malay': 0.9658916,
  'chinese': 0.33373892}]
[15]:
quantized_model.predict_proba([string,another_string])
[15]:
[{'severe toxic': 0.28386846,
  'obscene': 0.25873762,
  'identity attack': 0.021321118,
  'insult': 0.19023287,
  'threat': 0.005617261,
  'asian': 0.00022211671,
  'atheist': 0.000109523535,
  'bisexual': 0.0019034147,
  'buddhist': 0.00038090348,
  'christian': 0.0016773939,
  'female': 0.007807076,
  'heterosexual': 0.0001899302,
  'indian': 0.049388766,
  'homosexual, gay or lesbian': 0.00043603778,
  'intellectual or learning disability': 0.0012571216,
  'male': 0.0043218136,
  'muslim': 0.018054605,
  'other disability': 0.0011820793,
  'other gender': 0.00044164062,
  'other race or ethnicity': 0.00012764335,
  'other religion': 0.0009614825,
  'other sexual orientation': 0.0040558875,
  'physical disability': 0.0005840957,
  'psychiatric or mental illness': 0.0023525357,
  'transgender': 0.003135711,
  'malay': 0.0013717413,
  'chinese': 0.0051787198},
 {'severe toxic': 0.9966523,
  'obscene': 0.82459927,
  'identity attack': 0.97338796,
  'insult': 0.49216133,
  'threat': 0.010962069,
  'asian': 0.0034621954,
  'atheist': 0.0007635355,
  'bisexual': 0.044597328,
  'buddhist': 0.0061615705,
  'christian': 0.0029616058,
  'female': 0.023250878,
  'heterosexual': 0.0038115382,
  'indian': 0.0068957508,
  'homosexual, gay or lesbian': 0.084989995,
  'intellectual or learning disability': 0.006228268,
  'male': 0.070231974,
  'muslim': 0.055434316,
  'other disability': 0.00017631054,
  'other gender': 0.02043128,
  'other race or ethnicity': 0.0032926202,
  'other religion': 0.0035361946,
  'other sexual orientation': 0.018447628,
  'physical disability': 0.0007721717,
  'psychiatric or mental illness': 0.004228982,
  'transgender': 0.0046984255,
  'malay': 0.7579823,
  'chinese': 0.8585954}]
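
To get a feel for the accuracy drop mentioned earlier, you can compare the float32 and quantized probabilities label by label. A minimal sketch over the second, more toxic string:

# a sketch: largest absolute differences between float32 and quantized
# probabilities for the same input
full = model.predict_proba([another_string])[0]
quant = quantized_model.predict_proba([another_string])[0]

diffs = {label: abs(full[label] - quant[label]) for label in full}
sorted(diffs.items(), key = lambda kv: kv[1], reverse = True)[:5]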

Open toxicity visualization dashboard

By default, calling predict_words will open a browser with a visualization dashboard; you can disable this with visualization=False.

[13]:
model.predict_words(another_string)
[14]:
from IPython.core.display import Image, display

display(Image('toxicity-dashboard.png', width=800))
_images/load-toxic_26_0.png
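
On a headless server you will usually want to skip the browser dashboard. A minimal sketch; whether predict_words returns the underlying attribution data in this mode is an assumption on our part, the tutorial only documents the visualization flag:

# a sketch: disable the browser dashboard, e.g. on a headless server;
# capturing a return value here is an assumption, only the
# visualization flag itself is documented above
result = model.predict_words(another_string, visualization = False)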

Vectorize

Let's say you want to visualize sentence / word level representations in a lower dimension; you can use model.vectorize,

def vectorize(self, strings: List[str], method: str = 'first'):
    """
    vectorize list of strings.

    Parameters
    ----------
    strings: List[str]
    method : str, optional (default='first')
        Vectorization layer supported. Allowed values:

        * ``'last'`` - vector from last sequence.
        * ``'first'`` - vector from first sequence.
        * ``'mean'`` - average vectors from all sequences.
        * ``'word'`` - average vectors based on tokens.

    Returns
    -------
    result: np.array
    """

Sentence level

[8]:
texts = [string, another_string, string1, string2]
r = quantized_model.vectorize(texts, method = 'first')
[9]:
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

tsne = TSNE().fit_transform(r)
tsne.shape
[9]:
(4, 2)
[11]:
plt.figure(figsize = (7, 7))
plt.scatter(tsne[:, 0], tsne[:, 1])
labels = texts
for label, x, y in zip(
    labels, tsne[:, 0], tsne[:, 1]
):
    label = (
        '%s, %.3f' % (label[0], label[1])
        if isinstance(label, list)
        else label
    )
    plt.annotate(
        label,
        xy = (x, y),
        xytext = (0, 0),
        textcoords = 'offset points',
    )
_images/load-toxic_31_0.png

Word level

[17]:
r = quantized_model.vectorize(texts, method = 'word')
[18]:
x, y = [], []
# method = 'word' returns a (token, vector) pair for every token,
# so collect the tokens in `x` and their vectors in `y`
for row in r:
    x.extend([i[0] for i in row])
    y.extend([i[1] for i in row])
[19]:
tsne = TSNE().fit_transform(y)
tsne.shape
[19]:
(107, 2)
[20]:
plt.figure(figsize = (7, 7))
plt.scatter(tsne[:, 0], tsne[:, 1])
labels = x
for label, x, y in zip(
    labels, tsne[:, 0], tsne[:, 1]
):
    label = (
        '%s, %.3f' % (label[0], label[1])
        if isinstance(label, list)
        else label
    )
    plt.annotate(
        label,
        xy = (x, y),
        xytext = (0, 0),
        textcoords = 'offset points',
    )
_images/load-toxic_36_0.png

Pretty good; the outliers are toxic words.

Stacking models

For more information, you can read https://malaya.readthedocs.io/en/latest/Stack.html

[16]:
albert = malaya.toxicity.transformer(model = 'albert')
INFO:tensorflow:loading sentence piece model
[18]:
malaya.stack.predict_stack([model, albert], [another_string])
[18]:
[{'severe toxic': 0.9968317,
  'obscene': 0.43022493,
  'identity attack': 0.90531594,
  'insult': 0.42289576,
  'threat': 0.0058603976,
  'asian': 0.000983668,
  'atheist': 0.0005495089,
  'bisexual': 0.0009623809,
  'buddhist': 0.0003632398,
  'christian': 0.0018632574,
  'female': 0.006050684,
  'heterosexual': 0.0025569045,
  'indian': 0.0056869243,
  'homosexual, gay or lesbian': 0.012232827,
  'intellectual or learning disability': 0.00091394753,
  'male': 0.011594971,
  'muslim': 0.0042621437,
  'other disability': 0.00027529505,
  'other gender': 0.0010361207,
  'other race or ethnicity': 0.0012320877,
  'other religion': 0.00091365684,
  'other sexual orientation': 0.0027996385,
  'physical disability': 0.00010540871,
  'psychiatric or mental illness': 0.000815311,
  'transgender': 0.0016718076,
  'malay': 0.96644485,
  'chinese': 0.05199418}]
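
malaya.stack.predict_stack combines the per-label probabilities from each model into a single score. A minimal sketch of the general idea using a geometric mean of the two models' outputs; whether this matches Malaya's default aggregation is an assumption here, see the Stack documentation linked above:

# a sketch of the idea behind stacking: combine per-label probabilities
# from several models, here with a plain geometric mean of two models
# (Malaya's actual default aggregation is described in the Stack page)
import numpy as np

p1 = model.predict_proba([another_string])[0]
p2 = albert.predict_proba([another_string])[0]

stacked = {label: float(np.sqrt(p1[label] * p2[label])) for label in p1}
sorted(stacked.items(), key = lambda kv: kv[1], reverse = True)[:5]
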
[ ]: