dc.description.abstract |
"Hate speech has become a major concern in recent years; whether online or offline, it is destructive to many communities and societies. The unchecked circulation of hateful content can desensitize societies to harmful stereotypes, allowing greater prejudice to become normalized.
Significant advances have been made in the automatic identification of hate speech on social media platforms, and many studies have addressed the problem with a variety of strategies. Transformer-based models such as BERT have recently been shown to be especially accurate at detecting hate, owing to their robust architecture, which processes text bidirectionally and is pretrained on enormous volumes of data. Yet even though BERT models can produce highly accurate results, they remain far from ideal for real-world applications. One notable shortcoming is their failure to handle topic bias when identifying hate, especially gender bias.
A recent study in the field of NLP introduced a novel method for reducing gender bias: applying the CDS algorithm to the GAP dataset and using the resulting data to fine-tune a BERT model. This technique appears promising, and it opens up the possibility of applying the same algorithm while training BERT-based models to identify hate, in order to determine whether the gender bias problem can be mitigated" |
en_US |