Abstract: Toxicity and hate speech remain persistent challenges in online communities, including live-streaming platforms such as Twitch. In this paper, we explore a lightweight, Python-based chat bot that integrates Twitch's IRC interface with the Google Perspective API to assess the toxicity of live chat messages in real time. The tool is designed for streamers seeking a minimal-effort moderation assistant and includes a graphical interface for monitoring per-channel toxicity metrics. To evaluate the system's effectiveness, we conducted an offline analysis using a toxic comment dataset. Our results show that the API can broadly distinguish toxic from non-toxic messages, achieving a mean toxicity score of 0.73 for labeled toxic content and 0.11 for benign content. However, limitations such as domain mismatch, API rate restrictions, and a lack of contextual understanding constrain the integration's real-world applicability. We discuss practical implications, ethical concerns, and paths for future work, including domain-specific dataset collection, threshold optimization, and user-centered evaluation. The findings highlight both the opportunities and limitations of integrating third-party AI services into real-time moderation workflows on platforms like Twitch.
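The paper's own source code is not reproduced here. As a rough illustration of the pipeline the abstract describes, the following minimal Python sketch connects to Twitch's public IRC interface and scores each incoming chat message with the Perspective API's TOXICITY attribute. The credentials, channel name, and the 0.7 flagging threshold are placeholders and assumptions for illustration, not values taken from the paper.

```python
import socket

import requests  # assumed HTTP client; the paper does not name its dependencies

# Hypothetical placeholders -- substitute real credentials before running.
PERSPECTIVE_KEY = "YOUR_PERSPECTIVE_API_KEY"
TWITCH_TOKEN = "oauth:YOUR_TWITCH_OAUTH_TOKEN"
TWITCH_NICK = "your_bot_account"
CHANNEL = "#somestreamer"

PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"


def toxicity_score(message: str) -> float:
    """Return Perspective's TOXICITY probability (0..1) for a message."""
    body = {
        "comment": {"text": message},
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(
        PERSPECTIVE_URL, params={"key": PERSPECTIVE_KEY}, json=body, timeout=10
    )
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


def run() -> None:
    # Twitch exposes chat over plain IRC at irc.chat.twitch.tv:6667.
    sock = socket.socket()
    sock.connect(("irc.chat.twitch.tv", 6667))
    sock.send(f"PASS {TWITCH_TOKEN}\r\n".encode())
    sock.send(f"NICK {TWITCH_NICK}\r\n".encode())
    sock.send(f"JOIN {CHANNEL}\r\n".encode())

    while True:
        # Simplification: assumes each recv() holds whole IRC lines; a robust
        # client would buffer partial lines across reads.
        for line in sock.recv(2048).decode("utf-8", "ignore").split("\r\n"):
            if line.startswith("PING"):
                # Keep-alive reply required by the IRC protocol.
                sock.send(line.replace("PING", "PONG").encode() + b"\r\n")
            elif "PRIVMSG" in line:
                text = line.split("PRIVMSG", 1)[1].split(":", 1)[1]
                score = toxicity_score(text)
                if score >= 0.7:  # assumed threshold, not from the paper
                    print(f"[{score:.2f}] flagged: {text}")


if __name__ == "__main__":
    run()
```

Note that a production bot would also need to respect Perspective's per-key rate limits (one of the constraints the abstract mentions), e.g. by batching or throttling requests rather than scoring every message synchronously.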
External IDs: doi:10.1109/cog64752.2025.11114410