TITLE:
Enhancing User Security on Instagram: A Multifaceted AI System for Filtering Abusive Comments
AUTHORS:
Ahlam Oudah Alhwiti, Mohammad A. Mezher
KEYWORDS:
Instagram Posts, Negative Comments, Education, Emotions, Social Media, Digital Abuse, Emotional Needs
JOURNAL NAME:
Social Networking, Vol.13 No.2, April 30, 2024
ABSTRACT: Social media platforms such as Instagram have increasingly become venues for online abuse and offensive comments. This study aimed to enhance user security and create a safe online environment by eliminating hate speech and abusive language. The proposed system employed a multifaceted approach to comment filtering based on multi-level filter theory. This involved compiling a comprehensive list of words representing various types of offensive language, from slang to explicit abuse. Machine learning models were trained to identify abusive messages through sentiment analysis and contextual understanding, and the system categorized comments as positive, negative, or abusive using sentiment analysis algorithms. Employing AI techniques, it created a dynamic filtering mechanism that adapts to evolving online language and abusive behavior. Integrated with Instagram while adhering to ethical data collection principles, the system sought to promote a clean and positive user experience and to encourage users to focus on non-abusive communication. Our machine learning models, trained on a cleaned Arabic-language dataset, achieved promising accuracy (75.8%) in classifying Arabic comments, with the potential to significantly reduce abusive content and provide users with a safer, more positive online experience.
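To make the described classification step concrete, the following is a minimal illustrative sketch of a three-class Arabic comment classifier (positive / negative / abusive). It uses a generic TF-IDF plus logistic regression pipeline from scikit-learn on hypothetical toy examples; the comments, labels, and model choice are assumptions for demonstration only and do not reflect the authors' dataset, trained models, or reported 75.8% accuracy.

```python
# Illustrative sketch only: a minimal three-class Arabic comment classifier.
# The toy comments, labels, and model choice are assumptions, not the paper's system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples (Arabic comment text, class label).
train_comments = [
    "صورة جميلة جدا",      # "a very beautiful picture"  -> positive
    "أحب هذا المحتوى",     # "I love this content"       -> positive
    "المنشور ممل",          # "the post is boring"        -> negative
    "لم يعجبني هذا",        # "I did not like this"       -> negative
    "أنت شخص حقير",         # insulting remark            -> abusive
    "اخرس يا غبي",          # insulting remark            -> abusive
]
train_labels = ["positive", "positive", "negative", "negative", "abusive", "abusive"]

# Character n-grams tend to cope better with Arabic morphology than word tokens.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_comments, train_labels)

# Classify an incoming comment and decide whether the filter should hide it.
new_comment = "اخرس"  # hypothetical incoming comment
predicted = model.predict([new_comment])[0]
if predicted == "abusive":
    print("Comment hidden by the filter.")
else:
    print(f"Comment shown (classified as {predicted}).")
```

In a deployed filter, the same predict step would run on each new comment fetched from the platform, with the word lists and retraining loop mentioned in the abstract keeping the model current as abusive language evolves.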