Cyber Sentinel: Monitoring and Rating Online Behavior to Combat Cyber Bullying
Keywords:
Convolutional Neural Networks (CNN), Cyberbullying, Electroencephalography (EEG), Machine Learning Algorithms, Natural Language Processing (NLP)
Abstract
The proliferation of hate speech on social media platforms such as YouTube, Facebook, and Twitter poses a significant threat to societal harmony and disproportionately impacts marginalized communities. To address this problem, we present a robust machine learning model that combines Convolutional Neural Networks (CNN) and Natural Language Processing (NLP) to identify bullying behavior on Twitter. Traditional methods for identifying bullying on social media have relied mainly on textual analysis. While this approach has its benefits, it fails to capture the complex nature of online communication, which frequently involves visuals. Our approach applies NLP to the textual content of tweets, identifying harmful language patterns and phrases indicative of bullying. Simultaneously, CNNs analyze images attached to tweets, identifying visual elements that may contribute to bullying. The NLP component of our model is designed to process and understand the nuances of natural language in tweets; it uses tokenization, sentiment analysis, and semantic similarity measures to classify tweets accurately as bullying or non-bullying. The CNN component learns to identify visual cues of bullying behavior from a dataset of images associated with bullying. Using the Twitter API, we continuously fetch tweets in real time, ensuring the model remains current with the latest language and visual trends. The combined use of NLP and CNN improves the model's true-positive detection rate, providing a comprehensive solution for identifying and mitigating social media bullying. Our project demonstrates that integrating textual and visual data analysis detects bullying on social media platforms more accurately. Deploying this model could make social media environments safer and more inclusive, protecting users from the harmful impacts of online bullying.
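As a toy illustration of the text-analysis step described above, the sketch below tokenizes a tweet and thresholds a simple lexicon-based score. The lexicon, scoring rule, and threshold are hypothetical stand-ins, not the trained NLP model in this work:

```python
import re

# Hypothetical lexicon of harmful terms; a real system would
# learn such signals from labeled data rather than hard-code them.
HARMFUL_TERMS = {"loser", "stupid", "ugly", "worthless"}

def tokenize(text):
    """Lowercase a tweet and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def bullying_score(text):
    """Fraction of tokens found in the harmful-term lexicon."""
    tokens = tokenize(text)
    if not tokens:
        return 0.0
    hits = sum(1 for tok in tokens if tok in HARMFUL_TERMS)
    return hits / len(tokens)

def classify(text, threshold=0.1):
    """Label a tweet by thresholding its lexicon score."""
    return "bullying" if bullying_score(text) >= threshold else "non-bullying"
```

In the full system, this rule-based score would be replaced by the learned NLP classifier, and its output would be combined with the CNN's prediction on any attached image before a final label is assigned.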