Explainable AI for Ethical Counter Speech Generation in Hate Speech Mitigation
Proceedings of the Fifteenth Language Resources and Evaluation Conference (LREC 2026)
Abstract
The proliferation of hate speech on digital communication platforms poses significant challenges to online safety and social cohesion. While automated hate speech detection systems have shown promise, their black-box nature limits user trust and understanding of AI-driven content moderation decisions. This paper presents a framework that integrates explainable AI (XAI) techniques with counter-speech generation to create transparent, ethical solutions for hate speech mitigation. Our approach combines a fine-tuned HateBERT model with a specialized Llama 3.1-8B-Instruct model for generating empathetic counter-narratives. The system employs five XAI methods (Integrated Gradients, Attention Visualization, LIME, Counterfactual Analysis, and Natural Language Explanations) to provide interpretable reasoning behind both detection and response-generation decisions. The integration of explainability mechanisms with counter-speech generation represents a novel contribution to ethical AI systems, fostering transparency and trust in automated hate speech mitigation while maintaining high performance standards for real-world deployment.
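To make the first of the five XAI methods concrete, the sketch below implements Integrated Gradients on a toy logistic-regression "detector" rather than the paper's actual HateBERT classifier; the toy model, weights, and inputs are illustrative assumptions, not the authors' implementation. The key property shown is completeness: the attributions sum to the difference between the model's score on the input and on the baseline.

```python
import numpy as np

def model(x, w, b):
    # Toy differentiable "hate speech score": sigmoid over a feature vector.
    # (Stand-in for HateBERT, which the paper actually uses.)
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def grad(x, w, b):
    # Analytic gradient of the sigmoid output w.r.t. the input features.
    s = model(x, w, b)
    return s * (1.0 - s) * w

def integrated_gradients(x, baseline, w, b, steps=200):
    # Average the gradient along the straight-line path baseline -> x
    # (midpoint Riemann approximation of the path integral),
    # then scale elementwise by (x - baseline).
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.mean(
        [grad(baseline + a * (x - baseline), w, b) for a in alphas], axis=0
    )
    return (x - baseline) * grads

# Illustrative weights and input (hypothetical values).
w = np.array([2.0, -1.0, 0.5])
b = -0.2
x = np.array([1.0, 0.3, 0.8])
baseline = np.zeros_like(x)

attr = integrated_gradients(x, baseline, w, b)
# Completeness axiom: sum of attributions ~= f(x) - f(baseline).
gap = model(x, w, b) - model(baseline, w, b)
print(attr, attr.sum(), gap)
```

For a transformer detector such as HateBERT, the same computation is typically applied to token embeddings (e.g., via a library like Captum), with an all-padding or zero embedding as the baseline.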